Lecture Topic: Image, Imaging, Image Capturing


Keywords: Image, signal, horizontal, vertical, human eye, retina, lens, sensor, analog, digital, imaging, camera, strip, photons, silver halide, CCD, aperture, shutter

What is an image?

An image is nothing more than a two-dimensional signal. It is defined by the mathematical function f(x,y), where x and y are the horizontal and vertical coordinates. The value of f(x,y) at any point gives the pixel value at that point of the image. A grayscale image is therefore a two-dimensional array of numbers, typically ranging between 0 and 255.
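A minimal sketch of this view of an image as a 2D array, using NumPy (the 4x4 values here are arbitrary illustration):

```python
import numpy as np

# A tiny 4x4 grayscale "image": f(x, y) is just an array lookup.
f = np.array([[  0,  64, 128, 255],
              [ 32,  96, 160, 224],
              [ 16,  80, 144, 208],
              [  8,  72, 136, 200]], dtype=np.uint8)

print(f.shape)           # (4, 4) -> rows by columns
print(f[1, 2])           # pixel value at row 1, column 2 -> 160
print(f.min(), f.max())  # values stay within 0..255 for an 8-bit image
```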

How does the human eye work?

The basic principle followed by cameras is taken from the way the human eye works. When light falls upon an object, it is reflected back after striking the object. The rays of light, when passed through the lens of the eye, form a particular angle, and the image is formed on the retina, the back wall of the eye. The image that is formed is inverted; the brain then interprets it, which is what lets us understand what we see. Because of this angle formation, we are able to perceive the height and depth of the object we are looking at (this is explained further in the tutorial on perspective transformation). When sunlight falls on the object (in this case a face), it is reflected back; different rays form different angles as they pass through the lens, and an inverted image of the object is formed on the back wall. The brain then interprets the image and re-inverts it.

How is a digital image formed?

Capturing an image with a camera is a physical process. Sunlight is used as the source of energy, and a sensor array is used for the acquisition of the image. When sunlight falls upon the object, the amount of light reflected by that object is sensed by the sensors, and a continuous voltage signal is generated from the sensed data. To create a digital image, we need to convert this data into digital form. The analog-to-digital conversion process, known as sampling and quantization, results in a two-dimensional array or matrix of numbers, which is nothing but a digital image.

What is imaging and what are its applications?

Imaging is the representation or reproduction of an object's form, especially a visual representation (i.e., the formation of an image). Applications include:

- Digital imaging: creating digital images, generally by scanning or through digital photography
- Disk image: a file which contains the exact content of a data storage medium
- Document imaging: replicating documents, commonly used in business
- Geophysical imaging
- Industrial process imaging
- Medical imaging: creating images of the human body or parts of it to diagnose or examine disease, e.g. medical optical imaging, magnetic resonance imaging, etc.

Image formation in analog cameras

In an analog camera, image formation is due to a chemical reaction that takes place on the strip (film) used for image formation. A 35mm strip is used, coated with silver halide (a light-sensitive chemical substance). Light consists of small particles known as photons. When these photons enter the camera, they react with the silver halide particles on the strip, producing silver, which forms the negative of the image:

Photons (light particles) + silver halide -> silver -> image negative

Image formation also involves several other concepts regarding how light passes inside the camera, such as the shutter, shutter speed, and the aperture and its opening.

Image formation in digital cameras

In digital cameras, image formation is not due to a chemical reaction as in analog cameras. Instead, a CCD array of sensors is used for image formation. CCD stands for charge-coupled device. It is an image sensor; like other sensors, it senses values and converts them into an electric signal. In the case of a CCD, it senses the image and converts it into an electric signal.

The CCD has the shape of an array or rectangular grid. It is like a matrix in which each cell contains a sensor that senses the intensity of photons. When light falls on the object, it reflects back after striking the object and is allowed to enter the camera. Each sensor of the CCD array is itself an analog sensor: when photons strike the chip, a small electrical charge is held in each photo sensor. The response of each sensor is directly proportional to the amount of light (photon) energy striking the surface of that sensor.

Since an image is a two-dimensional signal, and the CCD array is itself two-dimensional, a complete image can be acquired from the CCD array. The array has a limited number of sensors, which means only a limited amount of detail can be captured. Also, each sensor holds only one value for the photons that strike it: the number of striking photons (the accumulated charge) is counted and stored. To measure this accurately, external CMOS circuitry is also attached to the CCD array. The value of each sensor of the CCD array becomes the value of one individual pixel: the number of sensors equals the number of pixels, and each sensor holds one and only one value.

The charges stored by the CCD array are converted to voltage one pixel at a time. With the help of additional circuits, this voltage is converted into digital information and stored in memory. The quality of the captured image also depends on the type and quality of the CCD array used.

Aperture

The aperture is a small opening which allows light to travel into the camera. Inside the aperture there are small blades; these blades form a roughly octagonal opening that can be opened and closed. The more the blades open, the bigger the hole through which the light passes, and the bigger the hole, the more light is allowed to enter. The aperture therefore directly affects the brightness or darkness of the image: a wide aperture opening lets more light into the camera, more light means more photons, and more photons ultimately produce a brighter image. Of two example pictures, the one on the right looks brighter, meaning the aperture was wide open when it was captured; the one on the left is very dark by comparison, which shows that when it was captured the aperture was not wide open.

Aperture size

The size of the aperture is denoted by an f value (f-number), which is inversely proportional to the opening of the aperture. Two relations summarize this concept:

Large aperture size = small f value
Small aperture size = large f value

Shutter

After the aperture comes the shutter. The light allowed to pass through the aperture falls directly onto the shutter. The shutter is a cover, a closed window, which can be thought of as a curtain: it is the only thing between the light coming from the aperture and the image sensor. As soon as the shutter opens, light falls on the image sensor and the image is formed on the array. If the shutter allows light to pass a bit longer, the image will be brighter. Similarly, a darker picture is produced when the shutter moves very quickly: fewer photons are allowed to pass, and the image formed on the CCD array is very dark.

Shutter speed

The shutter speed refers to the number of times the shutter opens and closes. (We are not talking here about how long the shutter stays open.)

Shutter time

The shutter time is the amount of time the shutter remains open before it closes. (Here we are not talking about how many times the shutter opens and closes, but about how long it remains open each time.)

Example: suppose a shutter opens and closes 15 times, and each time it stays open for 1 second. In this example, 15 is the shutter speed and 1 second is the shutter time.

Shutter speed and shutter time are inversely proportional to each other:

More shutter speed = less shutter time
Less shutter speed = more shutter time

The less time each exposure takes, the higher the speed; the more time it takes, the lower the speed.

Capturing images of fast-moving objects

Suppose we want to capture the image of a fast-moving object, say a car. The adjustment of shutter speed and shutter time matters a great deal. To capture such an image we make two adjustments:

- Increase shutter speed
- Decrease shutter time

When we increase the shutter speed, the shutter opens and closes more often, so different samples of light are allowed to pass in. When we decrease the shutter time, we capture the scene immediately and close the shutter gate. The result is a crisp image of a fast-moving object.

Example: suppose we want to capture the image of a fast-moving waterfall. With the shutter open for a full second the water blurs; moving the shutter speed to a faster range reduces the blur; faster still, with the shutter opening and closing in 1/200th of a second, we get a crisp image.

Active and passive imaging

When sunlight falls on objects, is reflected back to the sensor array, and an image is captured, we call it passive imaging. When no external illumination source such as sunlight or an electric bulb is available, some sort of artificially generated illumination is necessary to capture the image; this is known as active imaging. For example, X-rays are passed through the human body to capture an image of its inside.

Questions:

Short:
1. What is an image?
2. What is a signal?
3. What is imaging?
4. What is CCD?
5. What is shutter time and shutter speed?

Broad:
1. How does the human eye work to capture an image?
2. How is a digital image formed?
3. What are the applications of imaging?
4. Describe image formation using an analog camera.

5. Differentiate active and passive imaging.

Critical:
1. How would you capture a fast-moving object?
2. Define the relationship between shutter time and shutter speed.

Lecture Topic: Analog to Digital Conversion of Signals
Keywords: Signal, Analog signal, Digital signal, Sampling, Quantization

Signals

In electrical engineering, the fundamental quantity representing some information is called a signal, regardless of whether the information is analog or digital. In mathematics, a signal is a function that conveys some information. In fact, any quantity measurable through time, over space, or over any higher dimension can be taken as a signal. A signal can be of any dimension and of any form.

Analog signals

An analog signal is a continuous signal defined with respect to time; such signals are defined over continuous independent variables. They carry a huge number of values and are very accurate due to the large range of values they can take. To store these signals exactly we would require infinite memory, because they can achieve infinitely many values on the real line. Analog signals are typically drawn as sine waves.

Human voice is an example of an analog signal: when we speak, the voice travels through air in the form of pressure waves, and thus corresponds to a mathematical function with independent variables of space and time and a value corresponding to air pressure. Another example is the sine wave

y = sin(x), where x is the independent variable.

Since we are dealing with signals, in our case a system is a mathematical model, a piece of code/software, a physical device, or a black box whose input is a signal, which performs some processing on that signal, and whose output is a signal. The input is known as the excitation and the output is known as the response.

Conversion of analog to digital signals

Two main concepts are involved in the conversion:

- Sampling
- Quantization

Sampling

Sampling, as its name suggests, means taking samples: taking samples of an analog signal along the x axis. Sampling is done on the independent variable.

Sampling is done on the x variable; we can also say that the conversion of the x axis (infinitely many values) to digital form is what sampling does. Sampling is further divided into up-sampling and down-sampling: if the number of samples taken along the x axis is too small, we increase it, which is known as up-sampling, and the reverse is known as down-sampling.

Quantization

Quantization, as its name suggests, means dividing into quanta (partitions). Quantization is done on the dependent variable, and in that sense it is the counterpart of sampling. In the case of the equation y = sin(x), quantization is done on the y variable, i.e. on the y axis. The conversion of the infinitely many y-axis values to a finite set of levels such as 1, 0, -1 (or any other set of levels) is known as quantization.
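A minimal sketch of both steps on y = sin(x), assuming 3 quantization levels (-1, 0, 1) purely for illustration:

```python
import numpy as np

# Sampling: keep only a finite set of x positions (the independent variable).
x = np.linspace(0, 2 * np.pi, 16)   # 16 samples over one period
y = np.sin(x)                        # still real-valued (analog amplitudes)

# Quantization: snap each amplitude to the nearest allowed level
# (the dependent variable). The levels -1, 0, 1 are an arbitrary choice here.
levels = np.array([-1.0, 0.0, 1.0])
y_q = levels[np.argmin(np.abs(y[:, None] - levels[None, :]), axis=1)]

print(np.round(y, 2))
print(y_q)  # the digital signal: finitely many samples, finitely many levels
```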

Why do we need to convert an analog signal to a digital signal?

The first and obvious reason is that digital image processing deals with digital images, which are digital signals; whenever an image is captured, it is converted into digital format and then processed. The second and more fundamental reason is that in order to perform operations on an analog signal with a digital computer, you would have to store that analog signal in the computer, and storing an analog signal exactly would require infinite memory. Since that is not possible, we convert the signal into digital format, store it in the digital computer, and then perform operations on it.

Questions:

Short:
1. What is an analog signal?
2. What is a digital signal?
3. What is sampling?
4. What is quantization?

Broad:
1. Discuss the process of conversion of an analog signal into digital form.

Lecture Topic: Image Storing, Image Presentation and File Formats
Keywords: Pixel, Resolution, Gray scale, Color image, Binary image, Bits per pixel, Image format, RGB

Pixel

A pixel is the smallest element of an image. Each pixel corresponds to one value; in an 8-bit grayscale image, the value of a pixel lies between 0 and 255. The value of a pixel at any point corresponds to the intensity of the light photons striking at that point: each pixel stores a value proportional to the light intensity at that particular location. Thousands of pixels together make up an image, and zooming into an image far enough makes the individual pixel divisions visible.

A pixel can also be defined as the smallest division of the CCD array: each division of the CCD array holds a value corresponding to the intensity of the photons striking it, and this value can also be called a pixel.

Calculation of the total number of pixels

The total number of pixels in an image equals the number of rows multiplied by the number of columns:

Total number of pixels = number of rows x number of columns

Equivalently, the number of (x,y) coordinate pairs makes up the total number of pixels.

Gray level

The value of a pixel at any point denotes the intensity of the image at that location, and is also known as the gray level. Each pixel has exactly one value, and each value denotes the intensity of light at that point of the image.

Pixel value 0: the value 0 means absence of light. 0 denotes dark; wherever a pixel has the value 0, black is shown at that point.

Pixel value 255: the value 255 means the CCD cell was charged with the full amount of light energy. 255 denotes full brightness; wherever a pixel has the value 255, white is shown at that point.

Bits per pixel (BPP) denotes the number of bits used to store one pixel. The number of different colors in an image depends on the color depth, i.e. the bits per pixel:

Bits per pixel    Number of colors
1 BPP             2 colors
2 BPP             4 colors
3 BPP             8 colors

4 BPP             16 colors
5 BPP             32 colors
6 BPP             64 colors
7 BPP             128 colors
8 BPP             256 colors
10 BPP            1,024 colors
16 BPP            65,536 colors
24 BPP            16,777,216 colors (16.7 million colors)
32 BPP            4,294,967,296 colors (about 4,294 million colors)

This is a pattern of exponential growth: n bits per pixel give 2^n colors. The familiar grayscale image is 8 BPP, meaning it has 256 different colors, or 256 shades of gray. In the case of 1 BPP, 0 denotes black and 1 denotes white; in the case of 8 BPP, 0 denotes black and 255 denotes white.

Gray color

Knowing the black and white values, we can work out the pixel value of gray: gray is the mid-point between black and white. In the case of 8 BPP, the pixel value denoting gray is 127, or 128 if counting from 1 rather than from 0.

Image storage requirements: calculating the size of an image

The size of an image depends on three things:

- the number of rows
- the number of columns
- the number of bits per pixel

The formula for calculating the size is:

Size of an image = rows x columns x BPP

Example: assume an image with 1024 rows and 1024 columns. Since it is a grayscale image, it has 256 different shades of gray, i.e. 8 bits per pixel. Putting these values into the formula:

Size of an image = rows x columns x BPP = 1024 x 1024 x 8 = 8,388,608 bits
Converting to bytes: 8,388,608 / 8 = 1,048,576 bytes
Converting to kilobytes: 1,048,576 / 1024 = 1024 KB
Converting to megabytes: 1024 / 1024 = 1 MB
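The same arithmetic as a small sketch (the 1024x1024, 8-bit figures are the worked example above; the 24-bit call is an extra illustration):

```python
def image_size(rows: int, cols: int, bpp: int) -> None:
    """Print the storage size of an uncompressed image."""
    bits = rows * cols * bpp
    size_bytes = bits / 8
    print(f"{bits} bits = {size_bytes:.0f} bytes "
          f"= {size_bytes / 1024:.0f} KB = {size_bytes / 1024 ** 2:.0f} MB")

image_size(1024, 1024, 8)   # 8388608 bits = 1048576 bytes = 1024 KB = 1 MB
image_size(1024, 1024, 24)  # a 24-bit color image at the same resolution: 3 MB
```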

Types of images

The binary image

The binary image, as its name states, contains only two pixel values, 0 and 1, where 0 refers to black and 1 refers to white. It is also known as a monochrome image. An interesting property of the binary image is that it has no gray levels: only the two colors black and white are found in it. Binary images are stored in the PBM (Portable BitMap) format.

Black and white image: an image consisting only of black and white can also be called a black and white image.

2-, 3-, 4-, 5- and 6-bit color formats

Images with color formats of 2, 3, 4, 5 and 6 bits are not widely used today; they were used in earlier times for old TV and monitor displays. Each of these formats has more than two intensity levels, and hence shades of gray, unlike the binary image. A 2-bit format has 4 levels, 3-bit has 8, 4-bit has 16, 5-bit has 32, and 6-bit has 64 different levels.

8-bit color format

The 8-bit color format is one of the most famous image formats; it has 256 different shades and is commonly known as the grayscale image. The range of values in 8 bits goes from 0 to 255, where 0 stands for black, 255 stands for white, and 127 stands for gray. This format was used initially by early models of the UNIX operating system and the early color Macintoshes.

Grayscale images are stored in the PGM (Portable GrayMap) format. This format is not supported by default on Windows; to view a grayscale PGM image we need an image viewer or an image processing toolbox such as Matlab. An image is a two-dimensional function and can be represented by a two-dimensional array or matrix; in the case of a grayscale photograph (such as the Einstein image shown in the original figure), there is a two-dimensional matrix behind it with values ranging between 0 and 255.

16-bit color format

The 16-bit format has 65,536 different colors and is also known as the high color format. It has been used by Microsoft in systems that support more than the 8-bit color format. The distribution of color in a color image is not as simple as in a grayscale image: a first idea is 5 bits for R, 5 bits for G and 5 bits for B, but then one bit remains at the end, so the distribution of the 16 bits is done as follows.

5 bits for R, 6 bits for G, 5 bits for B. The additional bit that was left over is added to the green channel, because green is the color most soothing to the eye among the three. Note that this distribution is not followed by all systems. Another distribution of the 16-bit format is 4 bits for R, 4 bits for G, 4 bits for B and 4 bits for the alpha channel; some distribute it as 5 bits for R, 5 bits for G, 5 bits for B and 1 bit for the alpha channel.

24-bit color format

The 24-bit color format is also known as the true color format. As in the 16-bit color format, the 24 bits are distributed among the three channels red, green and blue; since 24 divides evenly by 3, the bits are distributed equally: 8 bits for R, 8 bits for G, 8 bits for B.

Behind a 24-bit image: unlike an 8-bit grayscale image, which has one matrix behind it, a 24-bit image has three different matrices, one each for R, G and B.
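A sketch of the 5-6-5 packing described above; the bit shifts are the standard way such a layout is encoded, and the sample color is arbitrary:

```python
def pack_rgb565(r: int, g: int, b: int) -> int:
    """Pack 8-bit R, G, B channels into one 16-bit 5-6-5 value."""
    return ((r >> 3) << 11) | ((g >> 2) << 5) | (b >> 3)

def unpack_rgb565(v: int) -> tuple:
    """Recover approximate 8-bit channels (the low bits are lost)."""
    r = (v >> 11) & 0x1F
    g = (v >> 5) & 0x3F
    b = v & 0x1F
    return (r << 3, g << 2, b << 3)

v = pack_rgb565(200, 100, 50)
print(hex(v))             # 16 bits: 5 for red, 6 for green, 5 for blue
print(unpack_rgb565(v))   # (200, 100, 48) -- green keeps the extra precision
```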

Format: the most commonly used format for such images is PPM (Portable PixMap), which is supported by the Linux operating system. Windows has its own format for it, BMP (Bitmap).

Questions:

Short:
1. Define pixel.
2. How many types of image are there? What are they?
3. What are the three basic colors used in a color image?

Broad:
1. Discuss the following formats: (a) PPM (b) 8-bit color format (c) 24-bit color format

Critical:
1. How is the total number of pixels in an image calculated?
2. How is the size of an image calculated?

Lecture 07 Topic: Color Theory
Keywords: color mixing, additive, subtractive

Color theory

Color theory principles first appeared in the writings of Leone Battista Alberti (c. 1435) and the notebooks of Leonardo da Vinci (c. 1490); a tradition of color theory proper began with the color principles of the 18th century. In the visual arts, color theory is a body of practical guidance on color mixing and on the visual effects of specific color combinations. There are also definitions (or categories) of colors based on the color wheel: primary colors, secondary colors and tertiary colors.

Color abstractions:
- Additive color mixing
- Subtractive color mixing

Questions:

Short:
1. Who introduced color principles?
2. What does color theory mean in the visual arts?
3. What are the categories of colors based on the color wheel?

Broad:
1. Discuss color theory.
2. Draw the diagrams of the additive and subtractive color mixing principles.

Lecture Topic: File Formats
Keywords: PDF, JPEG, TIFF, Netpbm, PPM, PBM, PGM, ASCII, Binary, Magic number

What is a file format?

A file format is the structure of how information is stored (encoded) in a computer file. File formats are designed to store specific types of information: for example, JPEG and TIFF for image or raster data, AI (Adobe Illustrator) for vector data, or PDF for document exchange.

Netpbm formats

A Netpbm format is any graphics format used and defined by the Netpbm project. The portable pixmap format (PPM), the portable graymap format (PGM) and the portable bitmap format (PBM) are image file formats designed to be easily exchanged between platforms. They are also sometimes referred to collectively as the portable anymap format (PNM). Each file starts with a two-byte magic number (in ASCII) that identifies the type of file (PBM, PGM or PPM) and its encoding (ASCII or binary). The magic number is a capital P followed by a single digit:

Type              ASCII  Binary  Extension  Colors
Portable BitMap   P1     P4      .pbm       0-1 (black & white)
Portable GrayMap  P2     P5      .pgm       0-255 (gray scale)
Portable PixMap   P3     P6      .ppm       0-255 per channel (RGB)

A value of P7 refers to the PAM file format, which is also covered by the Netpbm library. The ASCII formats allow human readability and easy transfer to other platforms; the binary formats are more efficient in file size but may have native byte-order issues. In the binary formats, PBM uses 1 bit per pixel, PGM uses 8 bits per pixel, and PPM uses 24 bits per pixel: 8 for red, 8 for green and 8 for blue.

PBM example

The example file begins with the line P1, followed by a comment line such as: # This is an example bitmap of the letter "J". There is a newline character at the end of each line (a complete listing is given below). The string P1 identifies the file format; the number sign introduces a comment; the next two numbers give the width and the height; then follows the matrix with the pixel values (in the monochrome case here, only zeros and ones). Magnified 20 times, the zeros and ones become a visible image. Note that in PBM a 0 signifies a white pixel and a 1 signifies a black pixel, in contrast to the other formats, where higher values signify brighter pixels. The P4 binary format of the same image represents each pixel with a single bit, packing 8 pixels per byte with the first pixel as the most significant bit; extra bits are added at the end of each row to fill a whole byte.
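For reference, the letter "J" encoded as a complete 6x10 ASCII PBM file (this follows the well-known Netpbm example the text describes):

```
P1
# This is an example bitmap of the letter "J"
6 10
0 0 0 0 1 0
0 0 0 0 1 0
0 0 0 0 1 0
0 0 0 0 1 0
0 0 0 0 1 0
0 0 0 0 1 0
1 0 0 0 1 0
0 1 1 1 0 0
0 0 0 0 0 0
0 0 0 0 0 0
```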

PGM example

The PGM and PPM formats (both ASCII and binary versions) have an additional parameter for the maximum value (the number of gray levels between black and white) after the X and Y dimensions and before the actual pixel data. Black is 0 and the maximum value is white. There is a newline character at the end of each line. An ASCII PGM file starts with P2; the example from the Netpbm man page on PGM shows the word "FEEP" drawn in gray levels.

PPM example

An ASCII PPM file has the same header layout. The header of a small example reads:

P3
# "P3" means this is an RGB color image in ASCII
# "3 2" is the width and height of the image in pixels
# "255" is the maximum value for each color
3 2
255

Below the header comes the image data: RGB triplets, one per pixel.
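A complete listing of that 3x2 ASCII PPM (the classic example: the top row is red, green, blue and the bottom row yellow, white, black):

```
P3
3 2
255
255   0   0     0 255   0     0   0 255
255 255   0   255 255 255     0   0   0
```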

The P6 binary format of the same image represents each color component of each pixel with one byte (thus three bytes per pixel) in the order red, green, then blue. The file is smaller, but the color information is difficult for humans to read. Since the image above uses only 0 or the maximal value for the red, green and blue channels, it can also be encoded with a maximum value of 1:

P3
# The same image with width 3 and height 2,
# using 0 or 1 per color (red, green, blue)
3 2
1
1 0 0   0 1 0   0 0 1
1 1 0   1 1 1   0 0 0

PNG

PNG stands for Portable Network Graphics. PNG is more efficient than the PPM format: it is a raster graphics file format that supports lossless data compression. PNG was created as an improved, non-patented replacement for the Graphics Interchange Format (GIF).

File header

A PNG file starts with an 8-byte signature:

Values (hex)  Purpose
89            Has the high bit set, to detect transmission systems that do not support 8-bit data and to reduce the chance that a text file is mistakenly interpreted as a PNG, or vice versa.
50 4E 47      In ASCII, the letters PNG, allowing a person to identify the format easily when it is viewed in a text editor.
0D 0A         A DOS-style line ending (CRLF), to detect DOS-Unix line ending conversion of the data.
1A            A byte that stops display of the file under DOS when the TYPE command is used: the end-of-file character.

0A            A Unix-style line ending (LF), to detect Unix-DOS line ending conversion.

"Chunks" within the file

After the header comes a series of chunks, each of which conveys certain information about the image. Chunks declare themselves as critical or ancillary, and a program encountering an ancillary chunk it does not understand can safely ignore it. A chunk consists of four parts: length (4 bytes, big-endian), chunk type/name (4 bytes), chunk data (length bytes) and CRC (cyclic redundancy code/checksum; 4 bytes). The CRC is a network-byte-order CRC-32 computed over the chunk type and chunk data, but not the length. (A reading sketch is given after the color-type list below.)

Length   Chunk type   Chunk data     CRC
4 bytes  4 bytes      Length bytes   4 bytes

Pixel format

Pixels in PNG images are numbers that may be either indices of sample data in the palette or the sample data itself. Sample data for a single pixel consists of a tuple of between one and four numbers. Whether the pixel data represents palette indices or explicit sample values, the numbers are referred to as channels, and every number in the image is encoded with an identical format: each number is an unsigned integral value using a fixed number of bits, referred to in the PNG specification as the bit depth. PNG allows the following combinations of channels, called the color type:

0 (000)  grayscale
2 (010)  red, green and blue: RGB/truecolor
3 (011)  indexed: a channel containing indices into a palette of colors

4 (100)  grayscale and alpha: a level of opacity for each pixel
6 (110)  red, green, blue and alpha
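A minimal sketch of walking the chunk layout described above, using only the standard library (the file name is a placeholder):

```python
import struct

PNG_SIGNATURE = b"\x89PNG\r\n\x1a\n"  # the 8-byte header described above

def list_chunks(path: str):
    """Yield (chunk type, data length) for every chunk in a PNG file."""
    with open(path, "rb") as f:
        assert f.read(8) == PNG_SIGNATURE, "not a PNG file"
        while True:
            # Each chunk: 4-byte big-endian length, 4-byte type,
            # <length> data bytes, then a 4-byte CRC.
            length = struct.unpack(">I", f.read(4))[0]
            ctype = f.read(4).decode("ascii")
            f.seek(length + 4, 1)  # skip the data and the CRC
            yield ctype, length
            if ctype == "IEND":    # the last chunk in every PNG
                break

for ctype, length in list_chunks("example.png"):  # placeholder file name
    print(ctype, length)
```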

Questions:

Short:
1. What is a file format?
2. What is a magic number?

Broad:
1. Discuss the Netpbm file formats.
2. Give an example of a Netpbm file format.
3. Write a short note on pixel format.
4. Write down the PNG file header format.
5. Write down the PNG color-type combinations.

Critical:
1. How can the letter J be written using the PBM file format?

Lecture Topic: Image Enhancement
Keywords: Spatial domain, Frequency domain, Histogram, Image transformation, Negative transformation, Gamma transformation, Log transformation, Fourier transformation, Piecewise transformation, Bit plane slicing

Image enhancement

In computer graphics, image enhancement is the process of improving the quality of a digitally stored image by manipulating it with software so that the result is more suitable than the original for a specific purpose. It is quite easy, for example, to make an image lighter or darker, or to increase or decrease contrast. Advanced image enhancement software also supports many filters for altering images in various ways. Programs specialized for image enhancement are sometimes called image editors.

Image enhancement techniques:
1. Spatial domain methods (direct manipulation of the pixels of the image)
2. Frequency domain methods (modifying the Fourier transform of the image)

Some useful examples:
- Filtering with morphological operators
- Histogram equalization
- Noise removal using a Wiener filter
- Linear contrast adjustment
- Median filtering
- Unsharp mask filtering
- Contrast-limited adaptive histogram equalization (CLAHE)
- Decorrelation stretch

Spatial domain methods: direct manipulation of the pixels of the image

A spatial domain process is defined by g(x,y) = T[f(x,y)], where T is an operator on f defined over a neighborhood of the point (x,y).

The smallest possible neighborhood size is 1x1; it can also be 5x5, 7x7, 9x9, etc. A 1x1 neighborhood operation is called point processing and is represented by the transformation function s = T(r), where s and r represent the intensities of g and f respectively.

Image negative

If the image has intensity levels in the range [0, L-1], the negative intensity transformation is given by

s = L - 1 - r
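A minimal sketch of the negative transformation on an 8-bit image (L = 256); the sample array is arbitrary:

```python
import numpy as np

L = 256  # number of gray levels in an 8-bit image

f = np.array([[0, 50, 100],
              [150, 200, 255]], dtype=np.uint8)

g = (L - 1 - f.astype(np.int32)).astype(np.uint8)  # s = L - 1 - r
print(g)  # [[255 205 155], [105  55   0]]
```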


Log transformations

For an image with intensities in the range [0, L-1], the log transformation is given by

s = c log(1 + r), where c is a constant.

It maps a narrow range of low input intensity values to a wider range of output levels, while a wide range of high input intensity levels is mapped to a narrower range of output levels. The log function has the important characteristic of compressing the dynamic range of images with large variations in pixel value. The classical example is displaying a Fourier spectrum: a Fourier spectrum can have values in the range 0 to about 1.5 x 10^6, and if these values are scaled linearly into an 8-bit display, almost everything but the brightest values is crushed to black; applying the log transform first makes the detail visible.
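A sketch of the log transform with c chosen so that the maximum input maps to 255 (a common convention, assumed here):

```python
import numpy as np

def log_transform(f: np.ndarray) -> np.ndarray:
    """Apply s = c*log(1 + r), scaling the result into 0..255."""
    f = f.astype(np.float64)
    c = 255.0 / np.log(1.0 + f.max())   # map the largest value to 255
    return (c * np.log(1.0 + f)).astype(np.uint8)

# A "spectrum-like" array with a huge dynamic range:
spectrum = np.array([[0, 10, 1000], [10_000, 100_000, 1_500_000]])
print(log_transform(spectrum))  # the low values become clearly visible
```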

Power-law (gamma) transformations

These have the basic form

s = c * r^gamma, where c and gamma are positive constants.

Fractional values of gamma map a narrow range of dark input values into a wider range of output values, with the opposite being true for higher values of input levels. These transformations are also called gamma correction, after the exponent in the power-law equation.

A CRT device has an intensity-to-voltage response that is a power function with an exponent varying from approximately 1.8 to 2.5; such a display system would produce images that are darker than intended. Gamma correction is therefore very important when an image must be reproduced exactly on a display system. Power-law transformations are also used for general-purpose contrast manipulation.
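A sketch of the power-law transform on intensities normalized to [0, 1]; gamma = 1/2.2 is assumed here as a typical value for pre-correcting a CRT-like display:

```python
import numpy as np

def gamma_transform(f: np.ndarray, gamma: float, c: float = 1.0) -> np.ndarray:
    """Apply s = c * r**gamma on intensities normalized to [0, 1]."""
    r = f.astype(np.float64) / 255.0
    s = c * np.power(r, gamma)
    return np.clip(s * 255.0, 0, 255).astype(np.uint8)

f = np.arange(0, 256, 51, dtype=np.uint8)   # [0, 51, 102, 153, 204, 255]
print(gamma_transform(f, 1 / 2.2))  # brightens: pre-corrects a dark display
print(gamma_transform(f, 2.2))      # darkens: what the CRT itself does
```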


Piecewise linear transformation functions: contrast stretching

Low-contrast images can result from:
- poor illumination
- lack of dynamic range in the imaging sensor
- wrong settings of the lens aperture during acquisition

Contrast stretching is a process that expands the range of intensity levels in an image so that it spans the full intensity range of the recording medium or display device.


The stretch is controlled by two points (r1, s1) and (r2, s2); intermediate choices of (r1, s1) and (r2, s2) produce various degrees of spread in the intensities.
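A sketch of the piecewise linear stretch through the control points (r1, s1) and (r2, s2); the sample points below are an arbitrary illustration:

```python
import numpy as np

def contrast_stretch(f, r1, s1, r2, s2, L=256):
    """Piecewise linear map through (0,0), (r1,s1), (r2,s2), (L-1,L-1).

    Assumes 0 < r1 < r2 < L-1.
    """
    r = f.astype(np.float64)
    out = np.empty_like(r)
    low, high = r < r1, r > r2
    mid = ~low & ~high
    out[low] = s1 / r1 * r[low]
    out[mid] = s1 + (s2 - s1) / (r2 - r1) * (r[mid] - r1)
    out[high] = s2 + (L - 1 - s2) / (L - 1 - r2) * (r[high] - r2)
    return out.astype(np.uint8)

f = np.array([40, 100, 150, 220], dtype=np.uint8)
print(contrast_stretch(f, r1=90, s1=30, r2=170, s2=220))  # mid-range expanded
```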


Intensity level slicing

Intensity level slicing means highlighting a specific range of intensities. Examples: enhancing features such as masses of water in satellite imagery, or enhancing flaws in X-ray images.
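A sketch of one common variant: intensities inside the band of interest are set to white and everything else is left unchanged (the band limits here are arbitrary):

```python
import numpy as np

def level_slice(f: np.ndarray, lo: int, hi: int) -> np.ndarray:
    """Highlight the range [lo, hi]; leave other intensities unchanged."""
    g = f.copy()
    g[(f >= lo) & (f <= hi)] = 255
    return g

f = np.array([[10, 120, 200], [130, 60, 145]], dtype=np.uint8)
print(level_slice(f, 100, 150))  # 120, 130 and 145 become 255
```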


Bit plane slicing

Each pixel is a digital number composed of bits; for a 256-level grayscale image there are 8 bits per pixel. Bit plane slicing lets us highlight the contribution each of these bits makes to the total image appearance.
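A sketch extracting the 8 bit planes of an 8-bit image; plane 7 (the most significant bit) carries most of the visually significant information:

```python
import numpy as np

f = np.array([[200, 37], [90, 255]], dtype=np.uint8)

# Plane k holds the k-th bit of every pixel (0 = least significant).
planes = [(f >> k) & 1 for k in range(8)]
for k in range(7, -1, -1):
    print(f"bit plane {k}:\n{planes[k]}")

# Reconstruction check: summing the planes weighted by 2**k recovers the image.
recon = sum(planes[k].astype(np.uint16) << k for k in range(8))
assert np.array_equal(recon, f)
```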

Questions:

Short:
1. What is image enhancement?
2. What are the techniques of image enhancement?
3. What do you understand by intensity level slicing?

Broad:
1. Give some examples of useful image enhancement.
2. Write a short note on spatial domain methods.
3. Discuss the image negative transformation.
4. Discuss the log transformation.
5. Discuss the power-law transformation.
6. Discuss piecewise linear transformations.
7. Write a short note on bit plane slicing.

Lecture Topic: Histogram
Keywords: Histogram processing, Histogram equalization

Histogram processing

Let the intensity levels in the image be in the range [0, L-1]. The histogram is the discrete function

h(r_k) = n_k,

where r_k is the k-th intensity value and n_k is the number of pixels in the image with intensity r_k. The histogram is normalized by dividing each component by the total number of pixels MN in the image, giving

p(r_k) = n_k / MN.

p(r_k) is an estimate of the probability of occurrence of intensity level r_k in the image, and the sum of all its components equals 1.
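A sketch computing h(r_k) and the normalized p(r_k) for a tiny 3-bit image (L = 8); the pixel values are arbitrary:

```python
import numpy as np

L = 8  # a 3-bit image
f = np.array([[0, 1, 1, 2],
              [2, 2, 3, 7],
              [1, 2, 2, 0]])

h = np.bincount(f.ravel(), minlength=L)  # h(r_k) = n_k
p = h / f.size                            # p(r_k) = n_k / MN
print(h)            # counts per intensity level
print(p, p.sum())   # normalized histogram; the components sum to 1
```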


Histogram equalization

Histogram equalization is a method of processing an image so as to adjust its contrast by modifying the intensity distribution of its histogram. The objective of the technique is to give a linear trend to the cumulative probability function associated with the image.

Let r in [0, L-1] denote the intensities of the image to be processed, with r = 0 corresponding to black and r = L-1 to white. Let the intensity transformation be defined by s = T(r), where 0 <= r <= L-1, and require that:

(a) T(r) is a monotonically increasing function on the interval 0 <= r <= L-1;
(b) 0 <= T(r) <= L-1 for 0 <= r <= L-1.

If we want to use the inverse operation r = T^(-1)(s), then condition (a) must be strengthened: T(r) must be strictly monotonically increasing.

Strict monotonicity makes the mapping one-to-one in both directions. Let us treat the intensity levels in the image as random variables in the interval [0, L-1], and define the probability density functions (PDFs) p_r(r) and p_s(s) for r and s respectively. If p_r(r) and T(r) are known, and T(r) is continuous and differentiable over the range of the PDF, then

p_s(s) = p_r(r) |dr/ds|.

The transformation function used in histogram equalization is of the form

s = T(r) = (L - 1) * integral from 0 to r of p_r(w) dw,

i.e. (L-1) times the cumulative distribution function of r.

Now let us compute p_s(s). Since s = T(r),

ds/dr = (L - 1) * p_r(r),

and substituting into p_s(s) = p_r(r) |dr/ds| gives

p_s(s) = p_r(r) * 1 / ((L - 1) * p_r(r)) = 1 / (L - 1), for 0 <= s <= L - 1.

Performing this intensity transformation therefore yields a random variable s characterized by a uniform PDF: T(r) depends on p_r(r), but p_s(s) is always uniform, independently of the form of p_r(r).

Suppose the intensity values in an image have the PDF

p_r(r) = 2r / (L - 1)^2 for 0 <= r <= L - 1, and 0 otherwise.

Then

s = T(r) = (L - 1) * integral from 0 to r of 2w/(L-1)^2 dw = r^2 / (L - 1).

Suppose L = 10 and the pixel at some location (x,y) has the value r = 3; then s = T(r) = r^2/9 = 1. The PDF of the intensities in the new image is the uniform density p_s(s) = 1/(L-1).

For the discrete values of a histogram we deal with summations instead of integrals. The discrete form of the transformation is

s_k = T(r_k) = (L - 1) * sum for j = 0 to k of p_r(r_j) = ((L - 1) / MN) * sum for j = 0 to k of n_j,

which maps the input intensity r_k to the output intensity s_k. The transformation (mapping) T(r_k) is called histogram equalization or histogram linearization.

Let us consider a 3-bit image (L = 8) of size 64 x 64 (MN = 4096) with a given intensity distribution n_0, n_1, ..., n_7. From the equation of histogram equalization,

s_0 = T(r_0) = 7 * p_r(r_0) and s_1 = T(r_1) = 7 * (p_r(r_0) + p_r(r_1)),

and similarly we compute s_2, s_3, s_4, s_5, s_6 and s_7, rounding each result to the nearest integer intensity.
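A sketch of the whole discrete computation; the pixel counts below are illustrative values assumed for the example (any distribution summing to 4096 works the same way):

```python
import numpy as np

L, MN = 8, 64 * 64
# Illustrative counts n_0..n_7 for the 3-bit image (assumed; they sum to 4096).
n = np.array([790, 1023, 850, 656, 329, 245, 122, 81])

p = n / MN                            # p_r(r_k)
s = np.round((L - 1) * np.cumsum(p))  # s_k = (L-1) * sum_{j<=k} p_r(r_j)
print(s.astype(int))                  # e.g. [1 3 5 6 6 7 7 7]
```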

Histogram equalization is an automatic enhancement. Sometimes the shape of the histogram can instead be specified based on a requirement; the method used to generate a processed image that has a specified histogram is called histogram matching or histogram specification.

Questions:

Short:
1. What is a histogram?

Broad:
1. Write a short note on histogram processing.
2. Write a short note on histogram equalization.

Critical:
1. Prove that p_s(s) = 1/(L-1) for 0 <= s <= L-1, and show the graphical interpretation.
2. Prove that p_s(s) = 1/(L-1) for 0 <= s <= L-1 for the following PDF:
   p_r(r) = 2r/(L-1)^2 for 0 <= r <= L-1, and 0 otherwise.

3. Consider a 3-bit image of a given resolution with a distribution of pixel counts n_k for r = 0, 1, ..., 7 (table not reproduced here). Find the values of p_r(r_k) and s_0, s_1, ..., s_7, and draw the graphs.

Lecture 14 Topic: Correlation and Convolution
Keywords: Correlation, Convolution, Windowing, Filter, 2D correlation

Correlation and convolution

Given a square filter F, we can compute the result of correlation by aligning the center of the filter with a pixel, multiplying all overlapping values together, and adding up the results. For a (2N+1) x (2N+1) filter we can write this as

J(x, y) = sum over u, v from -N to N of F(u, v) * I(x + u, y + v).

We can perform averaging of a 2D image using a 2D box filter, which for a 3x3 filter has the value 1/9 in every cell:

1/9 * [ 1 1 1
        1 1 1
        1 1 1 ]

Applying this box filter to an image I produces a smoothed image J (shown as figures in the original); the sketch below computes the same thing.
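A minimal sketch of 2D correlation with the 3x3 box filter, written out directly from the sum above (borders are handled here by zero-padding, an arbitrary choice):

```python
import numpy as np

def correlate2d(I: np.ndarray, F: np.ndarray) -> np.ndarray:
    """Correlation: center F on each pixel, multiply overlaps, and sum."""
    N = F.shape[0] // 2
    Ip = np.pad(I.astype(np.float64), N)       # zero-pad the borders
    J = np.zeros_like(I, dtype=np.float64)
    for x in range(I.shape[0]):
        for y in range(I.shape[1]):
            J[x, y] = np.sum(F * Ip[x:x + 2*N + 1, y:y + 2*N + 1])
    return J

box = np.ones((3, 3)) / 9.0                    # the 3x3 averaging filter
I = np.array([[0, 0, 0, 0],
              [0, 9, 9, 0],
              [0, 9, 9, 0],
              [0, 0, 0, 0]])
print(correlate2d(I, box))                     # the bright block gets blurred
```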

Convolution

Convolution is just like correlation, except that we flip the filter over before correlating. For example, convolution of a 1D image with the filter (3, 7, 5) is exactly the same as correlation with the filter (5, 7, 3). We can write this as

J(x) = sum over u of F(u) * I(x - u).

In the case of 2D convolution we flip the filter both horizontally and vertically, which can be written as

J(x, y) = sum over u, v of F(u, v) * I(x - u, y - v).

Questions:

Broad:
1. What do you understand by correlation and convolution? Write the mathematical interpretations of both.
2. Write a short note on correlation.
3. Write a short note on convolution.

Critical:
1. Apply three different methods of correlation to a given pixel matrix (use an arbitrary mask for the correlations).

Lecture Topic: Edge Detection
Keywords: Edge, Vertical, Horizontal, Diagonal, Operator, Prewitt, Sobel, Robinson compass, Laplacian, Mask, Canny, Gaussian, Suppression, Intensity gradient

Edge: sudden changes or discontinuities in an image are called edges.

Types of edges

Generally edges are of three types:
- horizontal edges
- vertical edges
- diagonal edges

Most of the shape information of an image is enclosed in its edges. So we first detect the edges in an image using edge filters, and then, by enhancing those areas of the image which contain edges, the sharpness of the image increases and the image becomes clearer.

Some of the masks used for edge detection:
- Prewitt operator
- Sobel operator
- Robinson compass masks
- Kirsch compass masks
- Laplacian operator

The Prewitt operator is used for detecting edges horizontally and vertically. The Sobel operator is very similar to the Prewitt operator: it is also a derivative mask used for edge detection, and it also calculates edges in both the horizontal and vertical directions.

Prewitt operator

The Prewitt operator is used for detecting two types of edges: horizontal and vertical. Edges are calculated by using the difference between corresponding pixel intensities of an image. All masks used for edge detection are also known as derivative masks, and every derivative mask should have the following properties:

- opposite signs should be present in the mask;
- the sum of the mask entries should equal zero;
- more weight means stronger edge response.

The Prewitt operator provides two masks, one for detecting edges in the vertical direction and one for the horizontal direction.

Vertical direction:

-1  0  1
-1  0  1
-1  0  1

This mask finds edges in the vertical direction, because of its column of zeros in the vertical direction. When we convolve this mask with an image, it gives us the vertical edges. It works like a first-order derivative, calculating the difference of pixel intensities in an edge region: since the center column is zero, it does not include the original values of the image but instead calculates the difference of the pixel values to the right and left around the edge. This increases the edge intensity, so the edge appears enhanced compared to the original image.

Horizontal direction:

-1 -1 -1
 0  0  0
 1  1  1

This mask finds edges in the horizontal direction, because its row of zeros lies in the horizontal direction. When we convolve this mask with an image, it makes the horizontal edges in the image prominent. Since the center row of the mask consists of zeros, it does not include the original values of the edge in the image but instead calculates the difference of the pixel intensities above and below the particular edge, thus amplifying the sudden change of intensities and making the edge more visible.

In example pictures (before applying a mask, after applying the vertical mask, and after applying the horizontal mask), the image to which the vertical mask was applied shows all vertical edges more visibly than the original image, and the image to which the horizontal mask was applied shows all horizontal edges.

Sobel operator

The Sobel operator is very similar to the Prewitt operator. It is also a derivative mask used for edge detection, and like the Prewitt operator it detects two kinds of edges in an image: vertical and horizontal.

Difference from the Prewitt operator: in the Sobel operator the coefficients of the masks are not fixed; they can be adjusted to our requirements as long as they do not violate any property of derivative masks.

The vertical mask of the Sobel operator is:

-1  0  1
-2  0  2
-1  0  1

This mask works exactly like the Prewitt vertical mask, with one difference: it has the values 2 and -2 at the centers of the first and third columns. When applied to an image, this mask highlights the vertical edges. Since the center column is zero, it does not include the original values of the image but calculates the difference of the pixel values to the right and left around the edge; the center values 2 and -2 give more weight to the pixel values directly beside the edge pixel, which increases the edge intensity so that the edges appear enhanced compared to the original image.

The horizontal mask of the Sobel operator is:

-1 -2 -1
 0  0  0
 1  2  1

This mask finds edges in the horizontal direction, because its row of zeros lies in the horizontal direction; convolving it with an image makes the horizontal edges prominent. The only difference from the Prewitt mask is the 2 and -2 at the centers of the first and third rows. It works on the same principle as the mask above, calculating the difference of the pixel intensities above and below the particular edge (the center row of zeros excludes the original edge values), thus amplifying the sudden change of intensities and making the edge more visible.

In example pictures (before applying a mask, after applying the vertical mask, and after applying the horizontal mask), the picture to which the vertical mask was applied shows all vertical edges more visibly than the original, and the one with the horizontal mask shows all horizontal edges. In this way we can detect both horizontal and vertical edges in an image. Comparing the result of the Sobel operator with the Prewitt operator, the Sobel operator finds more edges, or makes edges more visible, because it allots more weight to the pixel intensities around the edges.

Applying more weight to the mask

The more weight we give the mask, the stronger the edge response we get. There is no fixed coefficient in the Sobel operator, so the center weights can, for example, be increased further to give another weighted operator:

-1  0  1
-5  0  5
-1  0  1

Comparing the result of this mask with the Prewitt vertical mask, it is clear that it brings out more edges than the Prewitt one, simply because we have allotted more weight in the mask.

Laplacian operator

The Laplacian operator is also a derivative operator used to find edges in an image. The major difference between the Laplacian and operators like Prewitt and Sobel is that those are all first-order derivative masks, while the Laplacian is a second-order derivative mask. It has two further variants: the positive Laplacian operator and the negative Laplacian operator. Another difference is that, unlike the other operators, the Laplacian does not take out edges in any particular direction; instead it takes out edges classified as:

- inward edges
- outward edges

Positive Laplacian operator

In the positive Laplacian we have a standard mask in which the center element is negative and the corner elements are zero:

0  1  0
1 -4  1
0  1  0

The positive Laplacian operator is used to take out outward edges in an image.

Negative Laplacian operator

In the negative Laplacian operator we also have a standard mask, in which the center element is positive, all corner elements are zero, and all remaining elements are -1:

 0 -1  0
-1  4 -1
 0 -1  0

The negative Laplacian operator is used to take out inward edges in an image. The Laplacian is a derivative operator; its use highlights gray-level discontinuities in an image and de-emphasizes regions with slowly varying gray levels. We cannot apply both the positive and the negative Laplacian operator to the same image; we apply just one of them, but the thing to remember is that if we apply the positive Laplacian operator, we subtract the resulting image from the original to get the sharpened image, and if we apply the negative Laplacian operator, we add the resulting image to the original to get the sharpened image. (Example figures: before applying the Laplacian operator, after applying the positive Laplacian operator, after applying the negative Laplacian operator.)
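A sketch applying the Sobel masks above and combining the two directions into a gradient magnitude; scipy's convolve2d does the 2D convolution, and the test image is arbitrary:

```python
import numpy as np
from scipy.signal import convolve2d

sobel_v = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]])  # vertical edges
sobel_h = sobel_v.T                                        # horizontal edges

I = np.zeros((8, 8))
I[2:6, 2:6] = 255.0     # a bright square: edges on all four sides

gx = convolve2d(I, sobel_v, mode="same")
gy = convolve2d(I, sobel_h, mode="same")
magnitude = np.hypot(gx, gy)       # combined edge strength
print(np.round(magnitude).astype(int))
```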

Canny edge detector

The Canny edge detector is an edge detection operator that uses a multi-stage algorithm to detect a wide range of edges in images. It was developed by John F. Canny in 1986; Canny also produced a computational theory of edge detection explaining why the technique works.

Canny edge detection is a technique to extract useful structural information from different vision objects while dramatically reducing the amount of data to be processed. It has been widely applied in various computer vision systems. Canny found that the requirements for applying edge detection on diverse vision systems are relatively similar; thus an edge detection solution addressing these requirements can be implemented in a wide range of situations. The general criteria for edge detection are:

- detection of edges with a low error rate, meaning the detection should accurately catch as many of the edges shown in the image as possible;
- the edge point detected by the operator should accurately localize on the center of the edge;
- a given edge in the image should only be marked once, and where possible, image noise should not create false edges.

To satisfy these requirements Canny used the calculus of variations, a technique which finds the function that optimizes a given functional. The optimal function in Canny's detector is described by the sum of four exponential terms, but it can be approximated by the first derivative of a Gaussian. Among the edge detection methods developed so far, the Canny algorithm is one of the most strictly defined methods, providing good and reliable detection.

Process of the Canny edge detection algorithm

The Canny edge detection algorithm can be broken down into 5 steps:

1. Apply a Gaussian filter to smooth the image in order to remove noise.
2. Find the intensity gradients of the image.
3. Apply non-maximum suppression to get rid of spurious responses to edge detection.
4. Apply a double threshold to determine potential edges.
5. Track edges by hysteresis: finalize the detection by suppressing all edges that are weak and not connected to strong edges.

Gaussian filter

Since all edge detection results are easily affected by image noise, it is essential to filter out the noise to prevent false detections. To smooth the image, a Gaussian filter is convolved with it; this slightly smooths the image and reduces the effect of obvious noise on the edge detector. The equation for a Gaussian filter kernel of size (2k+1) x (2k+1) is

H(i, j) = (1 / (2*pi*sigma^2)) * exp(-((i - k - 1)^2 + (j - k - 1)^2) / (2*sigma^2)), for 1 <= i, j <= 2k+1.

A commonly cited integer approximation of the 5x5 kernel with sigma = 1.4 is

(1/159) * [ 2  4  5  4  2
            4  9 12  9  4
            5 12 15 12  5
            4  9 12  9  4
            2  4  5  4  2 ]

convolved with the image (the asterisk in the original figure denotes a convolution operation). The selection of the Gaussian kernel size affects the performance of the detector: the larger the size, the lower the detector's sensitivity to noise, but the localization error for detecting the edge also slightly increases with the kernel size. A 5x5 kernel is a good size for most cases, though this will vary with the specific situation.

Finding the intensity gradient of the image

An edge in an image may point in a variety of directions, so the Canny algorithm uses four filters to detect horizontal, vertical and diagonal edges in the blurred image.
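A sketch that builds the (2k+1) x (2k+1) kernel straight from the formula above and normalizes it to sum to 1 (the usual convention, assumed here):

```python
import numpy as np

def gaussian_kernel(k: int, sigma: float) -> np.ndarray:
    """Build a (2k+1)x(2k+1) Gaussian kernel, normalized to sum to 1."""
    ax = np.arange(-k, k + 1)
    xx, yy = np.meshgrid(ax, ax)
    H = np.exp(-(xx**2 + yy**2) / (2.0 * sigma**2))
    return H / H.sum()

K = gaussian_kernel(2, 1.4)   # the 5x5, sigma = 1.4 case discussed above
print(np.round(K, 3))         # symmetric, center-heavy weights summing to 1
```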

An edge detection operator (such as Roberts, Prewitt, or Sobel) returns a value for the first derivative in the horizontal direction (Gx) and the vertical direction (Gy). From these, the edge gradient magnitude and direction can be determined:

G = sqrt(Gx^2 + Gy^2)
theta = atan2(Gy, Gx)

where G can be computed using the hypot function and atan2 is the arctangent function with two arguments. The edge direction angle is rounded to one of four angles representing vertical, horizontal and the two diagonals (0, 45, 90 and 135 degrees); an edge direction falling in each region is set to a specific angle value, for instance theta in [0, 22.5] or [157.5, 180] maps to 0 degrees.

Non-maximum suppression

Non-maximum suppression is an edge thinning technique, applied to "thin" the edge. After the gradient calculation, the edge extracted from the gradient values is still quite blurred; with respect to criterion 3, there should be only one accurate response per edge. Non-maximum suppression therefore suppresses all gradient values (by setting them to 0) except the local maxima, which indicate the locations with the sharpest change of intensity value. The algorithm, for each pixel in the gradient image, is:

1. Compare the edge strength of the current pixel with the edge strength of the pixels in the positive and negative gradient directions.
2. If the edge strength of the current pixel is the largest compared to the other pixels in the mask with the same direction (e.g., a pixel pointing in the y-direction is compared to the pixels above and below it on the vertical axis), the value is preserved; otherwise it is suppressed.

In some implementations, the algorithm categorizes the continuous gradient directions into a small set of discrete directions, and then moves a 3x3 filter over the output of the previous step (i.e., the edge strengths and gradient directions). At every pixel, it suppresses the edge strength of the center pixel (by setting its value to 0) if its magnitude is not greater than the magnitudes of the two neighbors in the gradient direction.

For example: if the rounded gradient angle is 0 degrees (the edge runs in the north-south direction), the point is considered to be on the edge if its gradient magnitude is greater than the magnitudes at the pixels in the east and west directions; if the rounded gradient angle is 90 degrees (the edge runs east-west), the comparison is against the north and south pixels; if the rounded gradient angle is 135 degrees (the edge runs northeast-southwest), the comparison is against the north-west and south-east pixels; and if the rounded gradient angle is 45 degrees (the edge runs northwest-southeast), the comparison is against the north-east and south-west pixels.

In more accurate implementations, linear interpolation is used between the two neighbouring pixels that straddle the gradient direction. For example, if the gradient angle is between 45 and 90 degrees, interpolation between the gradients at the north and north-east pixels gives one interpolated value, and interpolation between the south and south-west pixels gives the other (using the conventions of the previous paragraph); the gradient magnitude at the central pixel must be greater than both of these for the pixel to be marked as an edge. Note that the sign of the direction is irrelevant: north-south is the same as south-north, and so on.

Double threshold

After non-maximum suppression, the remaining edge pixels provide a more accurate representation of the real edges in the image. However, some edge pixels remain that are caused by noise and color variation. To account for these spurious responses, it is essential to filter out edge pixels with a weak gradient value and preserve those with a high gradient value; this is accomplished by selecting high and low threshold values. If an edge pixel's gradient value is higher than the high threshold, it is marked as a strong edge pixel; if it lies between the low and high thresholds, it is marked as a weak edge pixel; and if it is smaller than the low threshold, it is suppressed. The two threshold values are determined empirically, and their definition depends on the content of the given input image.

Edge tracking by hysteresis

The strong edge pixels should certainly be included in the final edge image, as they are extracted from the true edges. There is, however, some debate about the weak edge pixels, since they can be extracted either from true edges or from noise and color variations; to achieve an accurate result, the weak edges caused by the latter should be removed. Usually a weak edge pixel caused by a true edge is connected to a strong edge pixel, while noise responses are unconnected. To track the edge connections, blob analysis is applied: we look at each weak edge pixel and its 8-connected neighborhood, and as long as at least one strong edge pixel is involved in the blob, that weak edge point is identified as one to be preserved. (Figures: the original image, and the Canny edge detector applied to a color photograph of a steam engine.)

Questions:

Short:
1. What is an edge in an image?
2. What are the types of edges?
3. What masks are used for edge detection?
4. What is the difference between the Prewitt operator and the Sobel operator?

Broad:
1. Write a short note on the Prewitt operator.
2. Write a short note on the Sobel operator.
3. Discuss the Laplacian operator, including the positive and negative Laplacian operators.
4. Write about the Canny edge detector, including the Canny edge detection algorithm.
5. Write a short note on the Gaussian filter.
6. How would you find the intensity gradient of an image?
7. Discuss non-maximum suppression.
8. Write a short note on double thresholding.
9. How are edges tracked by hysteresis?

Lecture Topic: Fourier Transformation
Keywords: Fourier series, Fourier transformation, Discrete Fourier transformation

Fourier series and transform

Fourier was a French mathematician who, in 1822, gave the Fourier series and Fourier transform, which convert a signal into the frequency domain.

Fourier series

The Fourier series simply states that periodic signals can be represented as a sum of sines and cosines multiplied by certain weights. It further states that periodic signals can be broken down into constituent signals with the following properties:

- the signals are sines and cosines;
- the signals are harmonics of each other;
- the sum of all these signals reproduces the original signal.

This was Fourier's idea. In order to process an image in the frequency domain, we first convert it into the frequency domain, and we take the inverse of the output to convert it back into the spatial domain. That is why both the Fourier series and the Fourier transform have two formulas: one for the conversion, and one for converting back to the spatial domain.

In trigonometric form, the Fourier series of a periodic signal f(t) with period T is

$$f(t) = \frac{a_0}{2} + \sum_{n=1}^{\infty}\left[a_n \cos\!\left(\frac{2\pi n t}{T}\right) + b_n \sin\!\left(\frac{2\pi n t}{T}\right)\right]$$

and the weights (coefficients) are calculated in the inverse direction by

$$a_n = \frac{2}{T}\int_0^T f(t)\cos\!\left(\frac{2\pi n t}{T}\right)dt, \qquad b_n = \frac{2}{T}\int_0^T f(t)\sin\!\left(\frac{2\pi n t}{T}\right)dt.$$

Fourier transform

The Fourier transform simply states that a non-periodic signal whose area under the curve is finite can also be represented as an integral of sines and cosines after being multiplied by a certain weight. The Fourier transform has many wide applications, including image compression (e.g. JPEG compression), filtering, and image analysis. Both the Fourier series and the Fourier transform were given by Fourier, but the difference between them is that the Fourier series applies to periodic signals, whereas the Fourier transform applies to non-periodic signals. Images are non-periodic, so the Fourier transform is used to convert them into the frequency domain. And since we are dealing with digital images, we work with the discrete Fourier transform.

Discrete Fourier transform

The Fourier transform of an image involves three things: spatial frequency, magnitude, and phase. The spatial frequency relates to how quickly the brightness varies across the image. The magnitude of each sinusoid relates to its contribution to the contrast.

Contrast is the difference between the maximum and minimum pixel intensity. The phase encodes where each sinusoid is positioned, i.e. the spatial structure of the image. The formula for the two-dimensional discrete Fourier transform of an M by N image is given below:

$$F(u,v) = \sum_{x=0}^{M-1} \sum_{y=0}^{N-1} f(x,y)\, e^{-j 2\pi \left( \frac{ux}{M} + \frac{vy}{N} \right)}$$

The discrete Fourier transform is a sampled version of the Fourier transform, so it consists of a finite grid of samples that represent the image. In the above formula, f(x,y) denotes the image and F(u,v) denotes its discrete Fourier transform. The formula for the two-dimensional inverse discrete Fourier transform is given below:

$$f(x,y) = \frac{1}{MN} \sum_{u=0}^{M-1} \sum_{v=0}^{N-1} F(u,v)\, e^{j 2\pi \left( \frac{ux}{M} + \frac{vy}{N} \right)}$$

The inverse discrete Fourier transform converts the Fourier transform back into the image.
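A minimal NumPy sketch of these two formulas, together with the frequency shift used to produce the shifted magnitude spectrum shown below; the random array merely stands in for a grayscale image:

```python
import numpy as np

# Sketch: 2-D DFT of a grayscale image and its shifted magnitude spectrum.
image = np.random.rand(128, 128)       # placeholder for any 2-D float image

F = np.fft.fft2(image)                 # forward 2-D DFT, F(u, v)
shifted = np.fft.fftshift(F)           # move zero frequency to the centre
magnitude = np.log1p(np.abs(shifted))  # log scale, as usually displayed

restored = np.fft.ifft2(F).real        # inverse DFT recovers f(x, y)
print("max reconstruction error:", np.max(np.abs(restored - image)))
```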

[Figures: the original image, the Fourier transform magnitude spectrum, the shifted Fourier transform, and the shifted magnitude spectrum.]

Questions:

Short:
1. What is a Fourier series?
