Unit 1 DIGITAL IMAGE FUNDAMENTALS


What Is a Digital Image?

An image may be defined as a two-dimensional function, f(x, y), where x and y are spatial (plane) coordinates, and the amplitude of f at any pair of coordinates (x, y) is called the intensity or gray level of the image at that point.

Fig: Coordinate convention used to represent digital images
Fig: Zoomed image, where the small white boxes inside the image represent pixels

A digital image is composed of a finite number of elements, referred to as picture elements, image elements, pels, or pixels. Pixel is the term most widely used to denote the elements of a digital image.
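To make the definition concrete, here is a minimal sketch (assuming NumPy and Pillow are available; the file name sample.png is a hypothetical placeholder) showing that a digital image is simply a 2-D array of intensity values:

import numpy as np
from PIL import Image  # Pillow, used here only to read the file

# Load a grayscale image as a 2-D array f(x, y) of 8-bit gray levels.
# "sample.png" is a placeholder file name for illustration.
img = np.asarray(Image.open("sample.png").convert("L"))

M, N = img.shape                   # M rows, N columns
print(f"Image size: {M} x {N} pixels")
print("Gray level at row 10, column 20:", img[10, 20])  # value in [0, 255]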

We can represent an M × N digital image as a compact matrix, as shown after the lists below. When x, y, and the amplitude values of f are all finite, discrete quantities, we call the image a digital image. The field of digital image processing refers to processing digital images by means of a digital computer.

Advantages of Digital Images
- Processing of images is faster and more cost-effective.
- Digital images can be stored effectively and transmitted efficiently from one place to another.
- Once an image is in digital format, reproducing it is both faster and cheaper.
- When shooting a digital image, one can immediately see whether the image is good or not.

Drawbacks of Digital Images
- A digital file cannot be enlarged beyond a certain size without compromising quality.
- The memory required to store and process good-quality images is very high.
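The original figure is not reproduced here; the standard matrix form the text refers to (a reconstruction consistent with the definition of f(x, y) above) is:

f(x, y) =
\begin{bmatrix}
f(0,0)   & f(0,1)   & \cdots & f(0,N-1) \\
f(1,0)   & f(1,1)   & \cdots & f(1,N-1) \\
\vdots   & \vdots   & \ddots & \vdots   \\
f(M-1,0) & f(M-1,1) & \cdots & f(M-1,N-1)
\end{bmatrix}

Each element of this matrix is one pixel; the matrix has M rows and N columns.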

Fundamental Steps in Digital Image Processing

Fig 1.1: Steps involved in Digital Image Processing

Image acquisition is the creation of digital images, typically from a physical scene. The most common method is digital photography with a digital camera. Generally, the image acquisition stage involves preprocessing, such as scaling.

Image enhancement: the idea behind enhancement techniques is to bring out detail that is obscured (unclear), or simply to highlight certain features of interest in an image. A familiar example of enhancement is increasing the contrast of an image because it "looks better." It is important to keep in mind that enhancement is a very subjective (personal-opinion) area of image processing; a sketch of one simple enhancement follows below.
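As an illustration of enhancement, here is a minimal contrast-stretching sketch (assuming a NumPy array of 8-bit gray levels; the linear stretch shown is just one of many enhancement techniques, not the only one):

import numpy as np

def contrast_stretch(img: np.ndarray) -> np.ndarray:
    """Linearly map the image's gray-level range onto the full [0, 255] scale."""
    lo, hi = img.min(), img.max()
    if hi == lo:                      # flat image: nothing to stretch
        return img.copy()
    stretched = (img.astype(np.float64) - lo) * 255.0 / (hi - lo)
    return stretched.astype(np.uint8)

# Example: a low-contrast image occupying only gray levels 100..150
low = np.random.randint(100, 151, size=(64, 64), dtype=np.uint8)
print(low.min(), low.max())           # roughly 100 150
out = contrast_stretch(low)
print(out.min(), out.max())           # 0 255: full dynamic range used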

Image restoration is an area that also deals with improving the appearance of an image. However, unlike enhancement, which is subjective, image restoration is objective, in the sense that restoration techniques tend to be based on mathematical or probabilistic models of image degradation. Enhancement, on the other hand, is based on human subjective preferences regarding what constitutes a good enhancement result. Image restoration takes a corrupted/noisy image and tries to remove the noise content, so that the output is as close as possible to the original image. In image enhancement we are not dealing with a noisy image; we take, say, a low-contrast image and try to enhance it to make it look better.

Color image processing is an area that has been gaining in importance because of the significant increase in the use of digital images over the Internet.

Wavelets are the foundation for representing images in various degrees of resolution. In particular, this material is used for image data compression and for pyramidal representation, in which images are subdivided successively into smaller regions.

Compression, as the name implies, deals with techniques for reducing the storage required to save an image, or the bandwidth required to transmit it.

Morphological processing is useful for extracting image components that are useful in the representation and description of shape.

Segmentation procedures partition an image into its constituent parts or objects. In general, autonomous segmentation is one of the most difficult tasks in digital image processing. Image segmentation is typically used to locate objects and boundaries (lines, curves, etc.) in an image (see the thresholding sketch at the end of this subsection).

Representation and description: there are two types of data representation, (i) boundary representation and (ii) regional representation. Boundary representation is appropriate when the focus is on external shape characteristics (e.g., faces, corners). Regional representation is appropriate when the focus is on internal properties, such as texture or skeletal shape. Description, also called feature selection, deals with extracting attributes that yield some quantitative information of interest or are basic for differentiating one class of objects from another.

Recognition is the process that assigns a label (e.g., "vehicle") to an object based on its descriptors.

Knowledge base: in addition to guiding the operation of each processing module, the knowledge base also controls the interaction between modules.
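As an illustration of segmentation, here is a minimal global-thresholding sketch (one of the simplest segmentation techniques; the threshold value 128 and the toy scene are illustrative choices, not part of the original text):

import numpy as np

def threshold_segment(img: np.ndarray, t: int = 128) -> np.ndarray:
    """Partition a grayscale image into object (255) and background (0)
    by comparing each pixel against a global threshold t."""
    return np.where(img > t, 255, 0).astype(np.uint8)

# Example: a bright square (the "object") on a dark background
scene = np.zeros((100, 100), dtype=np.uint8)
scene[30:70, 30:70] = 200
mask = threshold_segment(scene)
print("object pixels found:", int((mask == 255).sum()))  # 1600 = 40 x 40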

Components of an Image Processing System

Fig 1.2: Components involved in an Image Processing system

Sensors produce an electrical output proportional to light intensity. With reference to sensing, two elements are required to acquire digital images. The first is a physical device (sensor) that is sensitive to the energy radiated by the object we wish to image. The second, called a digitizer, is a device for converting the output of the physical sensing device into digital form. For instance, in a digital video camera, the sensors produce an electrical output proportional to light intensity, and the digitizer converts these outputs to digital data.

Specialized image processing hardware usually consists of the digitizer, plus hardware that performs other primitive operations, such as an arithmetic logic unit (ALU). One example of how an ALU is used is in averaging images as quickly as they are digitized, for the purpose of noise reduction (a sketch follows at the end of this subsection). This type of hardware is sometimes called a front-end subsystem. In other words, this unit performs functions that require fast data throughput (e.g., digitizing and averaging video images at 30 frames/s) that the typical main computer cannot handle.

The computer in an image processing system is a general-purpose computer and can range from a PC to a supercomputer. In dedicated applications, specially designed computers are sometimes used to achieve a required level of performance, but our interest here is in general-purpose image processing systems. In these systems, almost any well-equipped PC-type machine is suitable for offline image processing tasks.
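A minimal sketch of the frame-averaging idea mentioned above (assuming a stack of noisy frames held as a NumPy array; in real front-end hardware this runs at video rate, while here it is ordinary array arithmetic):

import numpy as np

def average_frames(frames: np.ndarray) -> np.ndarray:
    """Average K noisy frames of the same scene to reduce noise.

    frames has shape (K, M, N); averaging K frames reduces the standard
    deviation of zero-mean noise by a factor of about sqrt(K)."""
    return frames.mean(axis=0).astype(np.uint8)

# Example: 16 noisy observations of a constant gray-level-100 scene
rng = np.random.default_rng(0)
clean = np.full((64, 64), 100.0)
noisy = (clean + rng.normal(0, 20, size=(16, 64, 64))).clip(0, 255)
print(noisy[0].std())                 # about 20: one frame is noisy
print(average_frames(noisy).std())    # about 5: noise reduced by ~sqrt(16)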

Software for image processing consists of specialized modules that perform specific tasks. A well-designed package also includes the capability for the user to write code.

Mass storage capability is a must in image processing applications. An image of size 1024 × 1024 pixels, in which the intensity of each pixel is an 8-bit quantity, requires one megabyte of storage space if the image is not compressed (this is checked in the sketch at the end of this subsection). Digital storage for image processing applications falls into three principal categories: (1) short-term storage for use during processing, (2) online storage for relatively fast recall, and (3) archival storage, characterized by infrequent access. Storage is measured in bytes (eight bits), Kbytes (one thousand bytes), Mbytes (one million bytes), Gbytes (giga, or one billion, bytes), and Tbytes (tera, or one trillion, bytes). One method of providing short-term storage is computer memory. Another is specialized boards, called frame buffers, that store one or more images and can be accessed rapidly, usually at video rates (e.g., at 30 complete images per second). Online storage generally takes the form of magnetic disks or optical-media storage.

Image displays in use today are mainly color (preferably flat-screen) TV monitors.

Hardcopy devices for recording images include laser printers and inkjet units, but paper is the obvious medium of choice for written material.

Networking means the exchange of information or services (e.g., through the Internet) among individuals, groups, or institutions. Networking is almost a default function in any computer system in use today. Because of the large amount of data inherent in image processing applications, the key consideration in image transmission is bandwidth.
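A quick check of the storage claim above (a sketch; the "one megabyte" figure treats 1 MB as 2^20 bytes, the usual convention in this context):

# Uncompressed storage for a 1024 x 1024 image with 8 bits (1 byte) per pixel
rows, cols, bytes_per_pixel = 1024, 1024, 1
total_bytes = rows * cols * bytes_per_pixel
print(total_bytes)              # 1048576 bytes
print(total_bytes / 2**20)      # 1.0, i.e. exactly one megabyte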

Human Eye

Figure 1: Cross-section of a human eye

Fig. 1 shows a cross-section of the human eye. The main elements of the eye are as follows:

The eyeball
The eyeball is approximately spherical, with its vertical measure approximately 24 mm, slightly less than the horizontal width. The field of view covers an area of about 160° (width) × 135° (height). The anterior of the eye has the outer coating of the cornea, while the posterior has the outer layer of the sclera.

Cornea
The cornea is a transparent, curved, refractive window through which light enters the eye. This segment (typically 8 mm in radius) is linked to the larger unit, the sclera, which extends over and covers the posterior portion of the optic globe. The cornea and sclera are connected by a ring called the limbus.

Iris and pupil
The pupil is the opening at the center of the iris. It controls the amount of light entering the eyeball; its diameter varies from 1 to 8 mm in response to illumination changes. In low-light conditions it dilates to increase the amount of light reaching the retina. Behind the pupil is the lens of the eye.

Lens
The lens is suspended from the ciliary body by the suspensory ligament, which is made up of fine transparent fibers. The lens is transparent (about 70% water) and absorbs approximately 8% of the visible light spectrum. Proteins in the lens absorb harmful infrared and ultraviolet light, preventing damage to the eye.

Choroid
Situated beneath the sclera, this membrane contains blood vessels that nourish the cells in the eye. Like the iris, it is pigmented, to prevent light from entering the eye from any direction other than the pupil.

Retina and fovea
Beneath the choroid lies the retina, the innermost membrane of the eye, where the light entering the eye is sensed by receptor cells. The retina has two types of photoreceptor cells: rods and cones. These receptor cells respond to light in the 330 to 730 nm wavelength range. The central portion of the retina, at its posterior part, is the fovea; it is about 1.5 mm in diameter.

Rods
There are about 100 million rods in the eye; they support dim-light (scotopic) vision. Their spatial distribution is radially symmetric about the fovea but varies across the retina, and they are spread over a larger area of the retina than the cones. Rods are extremely sensitive and can respond even to a single photon.

However, rods are not involved in color vision, and despite their large number they cannot resolve fine spatial detail, because many rods are connected to a single nerve ending.

Cones
There are about 6 million cones in the eye. The cones serve bright-light (photopic) vision and are highly sensitive to color. They are located primarily in the fovea, where the image is focused by the lens. Each cone cell is connected to its own nerve ending; hence cones have the ability to resolve fine detail.

Blind spot
Although the photoreceptors are distributed in a radially symmetric manner about the fovea, there is a region near the fovea with no receptors at all. This region is called the blind spot; it is where the optic nerve emerges from the eye. Light falling on this region cannot be sensed.

Image formation in the eye
The focal length of the lens (the distance between the center of the lens and the retina) varies between 14 mm and 17 mm. To focus on distant objects, the controlling muscles cause the lens to be relatively flattened; the same muscles allow the lens to become thicker in order to focus on objects near the eye. An inverted image of the object is formed on the fovea region of the retina.

In the figure above, the observer is looking at a tree 15 m high at a distance of 100 m. If h is the height in mm of that object in the retinal image, similar triangles make it easy to calculate the size of the retinal image of any object: 15/100 = h/17, or h = 2.55 mm (a code sketch of this calculation appears at the end of this subsection).

Brightness Adaptation and Discrimination

The human eye can adapt to an enormous range (on the order of 10^10) of intensity levels. The brightness that we perceive (subjective brightness) is not a simple function of the intensity; in fact, subjective brightness is a logarithmic function of the light intensity incident on the eye. The HVS (Human Visual System) mechanisms adapt to different lighting conditions. The sensitivity level for a given lighting condition is called the brightness adaptation level. As the lighting condition changes, our visual sensory mechanism adapts by changing its sensitivity. The human eye cannot respond to the entire range of intensity levels at a given level of sensitivity.

Example: If we stand in a brightly lit area, we cannot discern details in a dark area, since it will appear totally dark. Our photoreceptors cannot respond to the low level of intensity because their sensitivity has been adapted to the bright light. However, a few minutes after moving into the dark room, our eyes adapt to the required sensitivity level and we are able to see in the dark area. This shows that although our visual system can respond to a wide dynamic range, it does so only by adapting to different lighting conditions; at a given point in time the eye responds well only to a particular range of brightness levels. The response of the visual system can therefore be characterized with respect to a particular brightness adaptation level.

How many different intensities can we see at a given brightness adaptation level? A typical human observer can discern between one and two dozen different intensity changes. If a person is looking at some point on a grayscale (monochrome) image, he would be able to discern about one to two dozen intensity levels. However, as the eyes move to some other point on the image, the brightness adaptation level changes, and a different set of intensity levels becomes discernible. Hence, at a given adaptation level the eye cannot discriminate between many intensity levels, but by varying the adaptation level the eye can discriminate a much broader range of intensity levels.
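Returning to the retinal-image example at the start of this subsection, here is a minimal sketch of the similar-triangles calculation (the 17 mm value is the lens-to-retina distance used in the worked example; object height and distance are in meters, the result in mm):

def retinal_image_height(object_height_m: float, distance_m: float,
                         lens_to_retina_mm: float = 17.0) -> float:
    """Similar triangles: object_height / distance = retinal_height / 17 mm."""
    return object_height_m / distance_m * lens_to_retina_mm

print(retinal_image_height(15, 100))  # 2.55 mm, matching the worked example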

Fig: Basic experimental setup used to characterize brightness discrimination

Example (for understanding only): If you lift and hold a weight of 2.0 kg, you will notice that it takes some effort. If you add another 0.05 kg to this weight and lift, you may not notice any difference in the apparent or subjective weight. If you keep adding weight, you may find that you notice the difference only when the additional weight equals 0.2 kg. The increment threshold for detecting a difference from a 2.0 kg weight is therefore 0.2 kg; the just-noticeable difference is 0.2 kg. For a weight of magnitude I = 2.0 kg, the increment threshold for detecting a difference was ΔI (pronounced "delta I") = 0.2 kg.

Example (the one to write in an exam): The discriminability of the eye also changes with the brightness adaptation level. Consider an opaque glass plate that is illuminated from behind by a light source whose intensity I can be varied. To this field an increment of illumination ΔI is added, in the form of a short-duration flash that appears as a circle at the center of the uniformly illuminated field. If ΔI is not bright enough, the subject says "no," indicating no perceivable change. As ΔI gets stronger, the subject may give a positive response of "yes," indicating a perceived change. The ratio ΔI/I is called the Weber ratio.

Fig: Typical Weber ratio as a function of intensity
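A minimal sketch tying the weight example to the Weber ratio (the numbers come straight from the example above):

def weber_ratio(increment_threshold: float, stimulus: float) -> float:
    """Weber ratio: just-noticeable increment divided by stimulus magnitude."""
    return increment_threshold / stimulus

# Weight example: a 0.2 kg increment is just noticeable on a 2.0 kg weight
print(weber_ratio(0.2, 2.0))  # 0.1, i.e. roughly a 10% change is needed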

A plot of log(ΔI/I) as a function of log I has the general shape shown in the figure above. It shows that brightness discrimination is poor at low levels of illumination and improves significantly as background illumination increases.

Mach-band effect

Figure 2: Mach band effect

The Mach-band effect is an optical illusion, as shown in Fig 2. The image consists of two regions of uniform intensity, one toward the left and one toward the right. In the middle there is a strip on which the intensity changes uniformly from the intensity level on the left side to the intensity level on the right side. If we observe carefully, we notice a dark band immediately to the right of the middle strip and a light band immediately to its left. The dark (or light) band actually has the same intensity level as the right (or left) part of the image, yet we perceive it as darker (or lighter). This is the Mach-band illusion. It happens because, as we look at a boundary between two intensity levels, the eye changes its adaptation level, and so we perceive the same intensity differently. A sketch that generates such a stimulus appears at the end of this subsection.

Simultaneous contrast

The perceived brightness of a region does not depend only on the intensity of the region, but on the context (background or surroundings) in which it is seen. All the center squares have exactly the same intensity; however, they appear to the eye to become darker as the background gets lighter.
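A minimal sketch (assuming NumPy and Matplotlib) that generates a Mach-band stimulus like the one described: two uniform regions joined by a linear ramp. The dark and light bands perceived at the ramp edges are produced by the visual system, not by the pixel data:

import numpy as np
import matplotlib.pyplot as plt

left, ramp, right = 80, 100, 80        # widths of the three regions (pixels)
lo, hi = 60, 200                       # gray levels of the flat regions

row = np.concatenate([
    np.full(left, lo),                 # uniform dark region
    np.linspace(lo, hi, ramp),         # linear intensity ramp
    np.full(right, hi),                # uniform bright region
])
img = np.tile(row, (120, 1)).astype(np.uint8)  # repeat the row vertically

plt.imshow(img, cmap="gray", vmin=0, vmax=255)
plt.title("Mach-band stimulus: bands appear at the ramp edges")
plt.axis("off")
plt.show()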

Light

Fig: The electromagnetic spectrum

The light we see illuminating objects is a very small portion of the electromagnetic spectrum: the visible color spectrum, which can be sensed by the human eye. Its wavelength spans from about 0.43 µm for violet to 0.79 µm for red. Wavelengths outside this range correspond to radiation that cannot be sensed by the human eye: the ultraviolet rays, X-rays, and gamma rays have progressively shorter wavelengths, while infrared rays, microwaves, and radio waves have progressively longer wavelengths.

The color that we perceive for an object is basically that of the light reflected from the object. Light perceived as gray shades from black to white is called monochromatic or achromatic light (light without color). Light perceived as colored is called chromatic light. Important terms that characterize a chromatic light source are:

Radiance: the total amount of energy that flows from the light source; measured in watts.
Luminance: the amount of energy an observer perceives from the light source; measured in lumens.
Brightness: indicates how a subject perceives the light, in a sense similar to that of achromatic intensity.