
IJESRT
INTERNATIONAL JOURNAL OF ENGINEERING SCIENCES & RESEARCH TECHNOLOGY

Digital Image Processing: An Overview of Computational Time Requirement

Ahaiwe J
Department of Information Management Technology, Federal University of Technology, Owerri, Imo State, Nigeria
fraircos@yahoo.com

Abstract
Image processing is a growing field covering a wide range of techniques for the manipulation of digital images [1]. Many image processing tasks can be characterized as computationally intensive. One reason for this is the vast amount of data that requires processing: more than seven million pixels per second for typical image sources. To keep up with these data rates and demanding computations in real time, the processing engine must provide specialized data paths, application-specific operators, creative data management, and careful sequencing and pipelining. This paper is confined to the major drawback of digital image processing, namely the computational time required, which is also referred to as the computational power of a digital image processing system.

Keywords: pipelining, computational time, data rate, image processing, data structures and algorithms.

Introduction
Interest in digital image processing methods stems from two principal application areas: improvement of pictorial information for human interpretation, and processing of image data for storage, transmission, and representation for autonomous machine perception [2]. Many image processing tasks can be characterized as computationally intensive. One reason for this is the vast amount of data that requires processing -- more than seven million pixels per second for typical image sources. To keep up with these data rates and demanding computations in real time, the processing engine must provide specialized data paths, application-specific operators, creative data management, and careful sequencing and pipelining.

Early electronic computers were severely limited by both the speed of operations and the amount of memory available. In some cases it was realized that there was a space-time tradeoff, whereby a task could be handled either by using a fast algorithm that used quite a lot of working memory, or by using a slower algorithm that used very little working memory. The engineering tradeoff was then to use the fastest algorithm that would fit in the available memory.

Overview of Digital Image Processing System

What is an Image?
An image can be defined in many ways, depending on the perspective in which the definer is interested:
1. An optical counterpart of an object produced by an optical device (such as a lens or mirror) or an electronic device [3].
2. A two-dimensional function, f(x, y), where x and y are spatial (plane) coordinates, and the amplitude of f at any pair of coordinates (x, y) is called the intensity or gray level of the image at that point [2]. A color image is simply a vector-valued function with three components, which we can write as f(x, y) = [f_R(x, y), f_G(x, y), f_B(x, y)].

Definition of Digital Image
An image is said to be digital when f(x, y) has been discretized both in its spatial coordinates and in brightness. A digital image can therefore be defined as a numeric representation (normally binary) of a two-dimensional image [4]. It can also be defined as an electronic representation of an image that is stored on a computer [5]. Types of digital image range from digital photos to the electron-microscope images used to study material structure.
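To make the definitions above concrete, here is a minimal sketch, assuming NumPy as the implementation library (the paper names none); the 512 x 512 grid, the random scene, and the 8-bit quantization are illustrative assumptions rather than anything specified by the paper.

```python
# A grayscale digital image is f(x, y) sampled on a grid and quantized to
# discrete gray levels; a color image stacks three component functions
# f_R, f_G, f_B into a vector-valued function.
import numpy as np

H, W = 512, 512                           # hypothetical spatial sampling grid
gray = np.random.rand(H, W)               # continuous-valued f(x, y) in [0, 1)
digital = (gray * 255).astype(np.uint8)   # quantize to 256 gray levels

color = np.stack([digital] * 3, axis=-1)  # vector-valued f = [f_R, f_G, f_B]
print(digital.shape, color.shape)         # (512, 512) (512, 512, 3)

# The "seven million pixels per second" figure quoted in the text is
# consistent with a 512 x 512 source delivered at 30 frames per second:
print(H * W * 30)                         # 7864320, i.e. ~7.9 million pixels/s
```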

Digital Image Processing
Digital image processing is simply the process of using a computer to operate on a digital image in its numerical representation and produce an output image. It has also been defined as:
1. The processing of digital images by means of a digital computer [2].
2. The use of computer algorithms to perform image processing on digital images [4].

Digital image processing (DIP) involves the modification of digital data to improve image quality with the aid of a computer. The processing helps in maximizing clarity, sharpness, and detail of features of interest, towards information extraction and further analysis. Digital image processing plays a vital role in the analysis and interpretation of remotely sensed data. In particular, data obtained from satellite remote sensing, which is in digital form, can best be utilized with the help of a digital image processing system. Image enhancement and information extraction are two important components of digital image processing. Image enhancement techniques help in improving the visibility of any portion or feature of the image while suppressing the information in other portions or features. Information extraction techniques help in obtaining statistical information about any particular feature or portion of the image.

An image processing system consists of a light source to illuminate the scene, a sensor system (usually a CCD camera), and an interface between the sensor system and the computer. Among other things, the interface converts analog information into digital data which the computer can understand. This takes place in a special piece of hardware, the frame grabber, which also stores the image. Many types of frame grabber hardware are supplied with special signal processors, so that very calculation-intensive parts of image processing programs can run in a time-efficient way. Usually the frame grabber package contains a library of often-used routines which can be linked to the user's program. The results of an image processing run are transferred to the outside world by one or more I/O interfaces: the screen and the normal output devices such as printers, disks, etc.

Digital Image Processing Components
The components of a basic, general-purpose digital image processing system are shown below; the operation of each block can be explained briefly.

Figure 1: Image Methods and Sensor System. Source: [7]

The term image processing suggests that the pictures to be processed are taken by a camera. This is often the case, but in general every sensor which produces spatially distributed intensity values of electromagnetic radiation that can be digitized and stored in RAM is suited to image capturing. Various image capturing systems are used, depending on the application field. They differ in:
- Acquisition principle
- Acquisition speed
- Spatial resolution
- Sensor system
- Spectral range
- Dynamic range
Apart from the area of consumer electronics, most apparatus are very costly. The greater the need for accuracy, the more hardware and software is used in the image capturing system. CCD sensors play a central role in most image processing systems. They are part of a complex system which makes it possible to take images in problematic environments with the necessary quality and accuracy.

CCD Camera
In a film camera, the photo-sensitive film is moved in front of the lens, exposed to light, and then mechanically transported to be stored in a film roll. A CCD camera has no mechanical parts. The incoming light falls on a CCD (Charge-Coupled Device) sensor, which consists of numerous light-sensitive semiconductor elements, the so-called pixels, arranged in lines or in a matrix. The image sensor is the heart of a digital camera: resolution, color accuracy, and the signal-to-noise ratio all depend on the quality of the CCD sensor.

The Frame Grabber
The electrical voltage signal produced by the sensor system is transferred to the frame grabber. The frame grabber is not identical to the graphics card in a normal computer; it has to meet many more requirements. A frame grabber should be able to:
- Process the information from various image sources
- Store image information quickly and efficiently
- Offer a graphic user interface
- Be flexible concerning various applications
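As a rough illustration of the frame grabber's analog-to-digital role described above, here is a hedged sketch; it is not the behavior of any particular frame grabber, and the voltage range, scanline width, simulated signal, and 8-bit depth are all assumptions.

```python
# Sample-and-quantize one analog video scanline into digital pixel values,
# imitating the A/D conversion stage of a frame grabber.
import numpy as np

def digitize_scanline(analog_volts, v_min=0.0, v_max=1.0, bits=8):
    """Quantize an analog scanline to 2**bits discrete gray levels."""
    levels = 2 ** bits - 1
    clipped = np.clip(analog_volts, v_min, v_max)          # limit to input range
    normalized = (clipped - v_min) / (v_max - v_min)       # map volts to [0, 1]
    return np.round(normalized * levels).astype(np.uint8)  # discrete levels

# Simulated analog voltages from one scanline of the CCD sensor:
scanline = 0.5 + 0.4 * np.sin(np.linspace(0, 2 * np.pi, 640))
pixels = digitize_scanline(scanline)
print(pixels.min(), pixels.max())   # digital gray levels within the 8-bit range
```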

Elements of a Digital Image Processing System

Image Processor
A digital image processor is the heart of any image processing system. An image processor consists of a set of hardware modules that perform four basic functions:
- Image acquisition
- Storage
- Low-level processing
- Display
Typically, the image acquisition module has a TV signal as its input and converts this signal into digital form, both spatially and in amplitude. Most modern image processors are capable of digitizing a TV image in one frame time (i.e., 1/30th of a second). For this reason, the image acquisition module is often referred to as a frame grabber. The storage module, often called a frame buffer, is a memory capable of storing an entire digital image. Usually, several such modules are incorporated in an image processor. The single most distinguishing characteristic of an image storage module is that the contents of the memory can be loaded or read at TV rates. The processing module performs low-level functions such as arithmetic and logic operations, and is thus often called an Arithmetic-Logic Unit (ALU). It is a specialized hardware device designed specifically to gain speed by processing pixels in parallel. The function of the display module is to read an image memory, convert the stored digital information into an analog video signal, and output this signal to a TV monitor or other video device.

Digitizers
A digitizer converts an image into a numerical representation suitable for input into a digital computer. Among the most commonly used input devices are micro-densitometers, flying-spot scanners, image dissectors, vidicon cameras, and photosensitive solid-state arrays. The first two devices require that the image to be digitized be in the form of a transparency (e.g., a film negative) or photograph. Image dissectors, vidicon cameras, and solid-state arrays can accept images recorded in this manner, but they have the additional advantage of being able to digitize natural images that have sufficient light intensity to excite the detector.

Display and Recording Devices
Monochrome and color television monitors are the principal display devices used in modern image processing systems. Monitors are driven by the output(s) of the image display module in the image processor. This signal can also be fed into an image recording device whose function is to produce a hard copy (slides, photographs, transparencies) of the image being viewed on the monitor screen. Other display media include CRTs and printing devices; printing image display devices are useful primarily for low-resolution image processing work.
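The processing module (ALU) described above gains speed by applying low-level arithmetic and logic operations to many pixels in parallel. The sketch below imitates those operations, with NumPy's vectorized array arithmetic standing in for hardware parallelism; the frame contents and the particular operations chosen are illustrative assumptions.

```python
# Pointwise ALU-style operations on frame buffers.
import numpy as np

frame_a = np.random.randint(0, 256, (512, 512), dtype=np.uint8)  # frame buffer A
frame_b = np.random.randint(0, 256, (512, 512), dtype=np.uint8)  # frame buffer B

# Arithmetic operations (widen to int16 to avoid 8-bit overflow, then clip):
added = np.clip(frame_a.astype(np.int16) + frame_b, 0, 255).astype(np.uint8)
diff  = np.clip(frame_a.astype(np.int16) - frame_b, 0, 255).astype(np.uint8)

# Logic operations:
masked = frame_a & 0b11110000                      # keep the 4 most significant bits
binary = (frame_a > 128).astype(np.uint8) * 255    # simple threshold

print(added.dtype, masked.dtype, binary.max())
```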
Computational Time Requirement

Overview
The first application of digital images was in the newspaper industry, when pictures were first sent by submarine cable between London and New York. The introduction of the Bartlane cable picture transmission system in the early 1920s reduced the time required to transport a picture across the Atlantic from a week to less than three hours [2].

The world of high performance computing is a rapidly evolving field of study, and many options are open to businesses when designing a product. Some systems can provide astonishing performance using the hundreds of cores available; others can provide computational acceleration to many signal and data processing applications.

The question arises as to what is the actual computational time required to process a digital image in real time, irrespective of the image type, size, software, or hardware used. The amount of time required for the computation may vary based on the different benchmark algorithms of the various processing units and other factors.

What is computational time?
Computation time (also called running time or execution time) is the length of time required to perform a computational process. Presenting a computation as a sequence of rule applications, the computation time is proportional to the number of rule applications. It is also defined as the amount of time the CPU actually uses in executing an instruction [6]. The computation time of a digital image is a crucial issue. In addition to quantifying the number of operations per second, it is useful to consider how fast computations are performed relative to the frame rate of an input image. Some tasks (histogramming, median filtering, region labeling, Gaussian pyramid generation, and morphological operations) are completed within one frame time; others (8x8 convolution and Laplacian pyramid generation) require two frame times.
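The frame-time framing above can be made concrete with a small back-of-the-envelope calculation. The sketch below estimates the operation count implied by the 8x8 convolution mentioned in the text; the 512 x 512 image size and 30 fps rate are assumptions consistent with the figures quoted earlier in the paper.

```python
# How many operations per second a per-pixel task implies, and whether it
# fits inside one frame time.
FRAME_RATE = 30                  # frames per second (one frame time = 1/30 s)
H, W = 512, 512                  # assumed image dimensions
K = 8                            # 8x8 convolution kernel, as in the text

frame_time = 1.0 / FRAME_RATE                    # ~33.3 ms available per frame
ops_per_frame = H * W * K * K                    # multiply-accumulates per frame
ops_per_second = ops_per_frame * FRAME_RATE

print(f"frame budget: {frame_time * 1e3:.1f} ms")
print(f"{ops_per_frame:,} ops/frame -> {ops_per_second:,} ops/s")
# 512*512*64 = 16,777,216 ops/frame -> ~503 million ops/s sustained, which
# suggests why the text reports 8x8 convolution needing two frame times.
```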

How a Digital Image Works With Time
Several aspects of image processing make it computationally challenging. For example, a single image represents a data set of considerable size: typically 256K picture elements, or pixels, for a black-and-white image. Many tasks require that several operations be performed on each pixel in the image, which may take a lot of computational time. Furthermore, when real-time operations are needed, they must be performed at live video rates, typically 30 images per second. To keep up with these capacious data rates and demanding computations in real time, the processing engine must provide specialized data paths, application-specific operators, creative data management, and careful sequencing and pipelining.

Factors That Affect the Computational Time of a Digital Image
The computational time of digital image processing depends mostly on the computational complexity (algorithm complexity). Complexity can be defined as a function which gives the running time and/or space in terms of the input size to an algorithm [8]. According to [9], the complexity of an algorithm can be of two types: time complexity and space complexity. While time complexity refers to the analysis of the computational time needed to execute an algorithm, space complexity focuses on the analysis of an algorithm to predict the memory required to run it. In addition to the algorithm complexity, the computation time also depends on the hardware design and the application software code.

Hardware design
Hardware designers typically perform extensive behavioral testing of new concepts before proceeding with an implementation. Due to the enormous processing time required to simulate a complex image processing system, executing a VHDL (VHSIC Hardware Description Language) module with a representative data set is not practical even on a fast workstation. Days, even weeks, are commonly needed to simulate the processing of a single full-size image; and since some applications process sequences of images, designers may need several hundred image simulations to adequately analyze only a few seconds of data. VHDL is a hardware description language used in electronic design automation to describe digital and mixed-signal systems such as those implemented on Field Programmable Gate Arrays (FPGAs).

VT Splash is a real-time image processing system based on the Splash-2 general-purpose platform. Splash-2 is a second-generation attached processor, designed by the Supercomputing Research Center in Bowie, Maryland, featuring programmable processing elements (PEs). Splash-2 systems use arrays of RAM-based Field Programmable Gate Arrays (FPGAs), crossbar networks, and distributed memory to accomplish the needed flexibility and performance. Even though Splash-2 was not designed specifically for image processing, its architectural properties are suited to the computation and data transfer rates characteristic of this class of problems. The price/performance ratio of this system also makes it competitive with conventional real-time image processing systems.

In the hardware design there are two main and critical signal paths:
1. Digitizing the video data and writing it to memory at the high-resolution rate, and
2. Once the data are ready in memory, reading the data and generating the low-resolution image.
Thus the execution time mainly depends on (i) the processor speed, and (ii) the time taken for fetching video information/data.
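Time complexity, discussed under the factors above, can also be observed empirically by timing the same operation as the input grows. The following sketch assumes a naive NumPy 3x3 mean filter (an illustrative choice, not an algorithm from the paper) and measures how its running time scales with image size.

```python
# Empirically checking that a per-pixel filter is O(n) in the pixel count.
import time
import numpy as np

def mean_filter_3x3(img):
    """Naive 3x3 mean filter: sums the 9 shifted copies of the image."""
    out = np.zeros_like(img, dtype=np.float64)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            out += np.roll(np.roll(img, dy, axis=0), dx, axis=1)
    return out / 9.0

for side in (128, 256, 512, 1024):
    img = np.random.rand(side, side)
    t0 = time.perf_counter()
    mean_filter_3x3(img)
    dt = time.perf_counter() - t0
    print(f"{side}x{side}: {dt * 1e3:7.2f} ms")
# Quadrupling the pixel count should roughly quadruple the running time,
# confirming linear growth in the input size.
```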
How to Reduce the Computational Time of Digital Image Processing
1. Processing time can be reduced by building dedicated hardware instead of processor-based hardware, although the flexibility to change parameters is reduced.
2. By increasing the processor speed, execution time can be reduced.
3. By parallel processing, processing time can be further reduced; however, hardware complexity increases, and the dialogue between the processors can become more cumbersome than the algorithm itself.
4. Since the scaling factors are fixed, the address of each pixel can be pre-computed and stored in memory, which reduces execution time (see the sketch at the end of this section).

In summary, due to the enormous amount of data involved in image/video processing, every operation counts, especially the time-consuming operations at the lower levels of the processing hierarchy. Thus, reduction in the number of operations plays a major role in achieving real-time performance. The strategy of pure operation reduction involves applying a transformation that reduces the number of operations without changing the numerical outcome. If the numerical outcome is changed, the approach is instead referred to as an approximation or a suboptimal/alternative solution. Any operation that has inherent symmetries or redundancies in its constituent computations is a candidate for this strategy; applying it amounts to uncovering hidden symmetries or redundancies within the computations, which can often be discovered by expanding the computations by hand and carefully noting any mathematical identities or properties. The strategy of approximations is similar to the strategy of reduction in computations, in that approximations also involve applying transformations to reduce the number of operations, but it differs from pure computational reduction in that the numerical outcome is only approximated.
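Item 4 in the list above trades a little memory for a large reduction in operations by pre-computing results. The sketch below generalizes the idea to a lookup table for a hypothetical per-pixel gamma correction (an assumption; the paper specifies no such operation): since an 8-bit pixel takes only 256 possible values, the mapping is computed once and then applied by indexing.

```python
# Pre-computed lookup table (LUT) versus per-pixel computation.
import numpy as np

GAMMA = 2.2  # illustrative parameter

# Pre-compute the mapping for all 256 possible input values once.
lut = (255.0 * (np.arange(256) / 255.0) ** (1.0 / GAMMA)).astype(np.uint8)

img = np.random.randint(0, 256, (512, 512), dtype=np.uint8)

corrected_slow = (255.0 * (img / 255.0) ** (1.0 / GAMMA)).astype(np.uint8)  # one pow per pixel
corrected_fast = lut[img]                                                   # one lookup per pixel

assert np.array_equal(corrected_slow, corrected_fast)
# The lookup version replaces ~262,144 floating-point power evaluations per
# frame with 256 evaluations done once, plus simple memory reads.
```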

Conclusion
A further strategy for real-time image processing is the reduction of data for the purpose of deriving a compact representation, which in turn speeds up subsequent stages of processing. Reduction of data takes many forms in real-time image/video processing systems, including spatial or temporal down-sampling, spatial block partitioning, region-of-interest or selective processing, formulating the algorithm in a multiresolution processing framework, appropriate feature extraction, etc. In all these cases, a certain subset of pixels from an image frame is processed.

In general, the fundamental idea behind real-time image/video processing systems is the utilization of simple or simplified algorithms. A rule of thumb when transitioning to a real-time implementation is to keep things as simple as possible: look for simple solutions using simple operations and computationally simple algorithms, as opposed to complex, computationally intensive algorithms, which may be optimal from a mathematical viewpoint but are not practical from a real-time point of view. With embedded devices now being outfitted with vision capabilities, such as camera-equipped cell phones and digital still/video cameras, the deployment of algorithms which are not only computationally efficient but also memory efficient is expected to grow. Often, algorithms are carefully analyzed in terms of their number of operations and computational complexity, but their storage requirements are equally important. In essence, simple algorithms provide a means of meeting real-time performance goals by lowering computational burdens, memory requirements, and, indirectly, power requirements as well.

The increase in the practical applications of face detection and recognition has increased the interest in their real-time implementations. Such implementations are possible via the use of simple algorithms; for instance, a simple algorithm for face detection based on the use of skin-color features was discussed, and a simplified method was developed to make the detection algorithm robust to regions in the background with skin-like colors.

References
[1] Richard Bridge, Computer Image Processing, Tessella Support Services plc, Issue V1.R2.M0, June 2003.
[2] Rafael C. Gonzalez and Richard E. Woods, Digital Image Processing, Third Edition, Prentice Hall, Upper Saddle River, NJ, 2007.
[3] Merriam-Webster Dictionary, "Image".
[4] www.wikipedia.com, "Digital image" and "Digital image processing", 2013.
[5] www.reading.ac.uk, "Digital image", 2013.
[6] Webopedia.com, "Computation time".
[7] Erhardt-Ferron, A., Theory and Application of Digital Image Processing, University of Applied Sciences Offenburg, www.dip-seminaronline.com/english, p. 1.
[8] Verma, Santosh (2011), Data Structures and Algorithms, First Edition, Acme Learning Private Limited, New Delhi, India.
[9] Bhaumik, A.K., Haldar, S., and Roy, S.S. (2010), Data Structures Using C, First Edition, S. Chand and Company Ltd., New Delhi, India.