APPLICATION OF COMPUTER VISION FOR DETERMINATION OF SYMMETRICAL OBJECT POSITION IN THREE DIMENSIONAL SPACE


APPLICATION OF COMPUTER VISION FOR DETERMINATION OF SYMMETRICAL OBJECT POSITION IN THREE DIMENSIONAL SPACE

Najirah Umar
Jurusan Teknik Informatika, STMIK Handayani Makassar
Email: najirah_stmikh@yahoo.com

Abstract

This research aimed to build an application that captures pictures with a webcam and processes them, using grayscale and binary images, to determine the position of an object. The application was written in Delphi and designed to classify object pixels into regions that represent the object, so that the object can be distinguished from the background. The design method was used: data obtained from system tests were analyzed with a use-case model and operation sequences, and the design and implementation models were built from that analysis. The results show that the application can capture pictures and process them to determine the position of a symmetrical object in three-dimensional space.

Key words: computer vision, grayscale image, binary image

1. INTRODUCTION

The computer is today one of the basic needs of science and technology, business, and personal life, because it is essentially a tool for solving the routine problems that arise in every aspect of human activity. Computer hardware and software are developing very rapidly, and this development is accompanied by ever wider use of computers in many fields.

Computer graphics is the field of computer science that studies ways to improve and ease communication between man and machine (the computer) by generating, storing, and manipulating images of an object with a computer model. Computer graphics lets users communicate through pictures, charts, and diagrams, and can therefore be applied in many fields (Insap Santosa, 2004). One of the areas growing out of it is image processing.
With its variety of textures and colors, an image can present information as desired. In the real world, a person absorbs information more easily by reading or analyzing images than a collection of words or figures (Soendoro Herlambang, 2004).

Computer vision tries to mimic human vision. Human vision is very complex: a person looks at an object with the sense of sight (the eye), the image of the object is transmitted to the brain for interpretation, and the person then understands what object appears before the eyes. This interpretation is used for decision making, for example to avoid an object or to determine the position of an object, in particular a symmetrical object. A symmetrical object is an object that has the same distances and angles when viewed from different directions in space. Symmetrical balance can be pictured as mirror balance: the opposite sides must match exactly to create balance, and when a straight line is drawn through the middle, one part is a reflection of the other.

Computer vision comprises techniques for estimating the characteristics of objects in an image, measuring characteristics associated with object geometry, and interpreting the geometric information, such as determining the position of an object. Here the position is represented by the horizontal X axis, the vertical Y axis, and the Z axis for the distance from the camera to a point on the object, in three-dimensional space.

The computer vision process can be divided into three activities:

a. Obtaining or acquiring digital images.
b. Applying computational techniques to process or modify the image data (image processing operations).
c. Analyzing and interpreting the image and using the results for a specific purpose, for example to guide a robot or to control equipment (Rinaldi Munir, 2004).

2. LITERATURE REVIEW
2.1 Computer Vision

Computer science is the systematic study of algorithmic processes that describe and transform information, whether it concerns their theory, analysis, design, efficiency, implementation, or application. One of its areas is computer vision.

Computer vision is an automated process that integrates a large number of processes for visual perception, such as data acquisition, image processing, classification, recognition, and decision making (Adrian Low, 1991). It comprises techniques for estimating the characteristics of objects in an image, measuring characteristics associated with object geometry, and interpreting the geometric information (Jain, Ramesh, 1995). Computer vision is a branch of artificial intelligence focused on developing algorithms that turn the information in an image into information about the real world. Its role is to supply input data with which the computer can understand its surroundings; the input data obtained is then processed so that the computer can produce the desired response.

Figure 1. Comparison of Computer Vision and Computer Graphics

The function of computer vision is to present real-world information as image information. Some of the main problems in computer vision are:

1. Sensing. How sensors acquire images of the outside world (the world view), including properties of the world such as material, shape, and illumination, and, in 3D, how the geometry, texture, motion, and identity of the objects are stored so that a computer can use them.
2. Decoding the information. How to extract whatever information is in the image so that the computer obtains it as completely as possible.
3. Using the information. Choosing what information is really needed and should be prioritized over the rest, what information in the image must be discarded because it can disrupt the system, what algorithm is needed to process the information, and how to use it.
Some subjects that make use of computer vision include:

a. Face recognition
b. 3D reconstruction (reconstruction of three-dimensional structure)
c. Motion tracking

Computer vision is also an application associated with artificial intelligence: a tool for analyzing and evaluating visual information with a computer. Artificial-intelligence techniques allow computers to recognize a picture and identify the objects in it. Using tracking and matching techniques, the computer can pick out special keys and search for and identify information that the human eye would miss. To help users solve a problem or make a decision, computer-vision software must first learn from visual information.

A visual system has the ability to recover useful information from an image. To improve this knowledge and information, the projection geometry of the object in the image is required. The science concerned with visual systems has, since it was first developed, continually generated new techniques, both to improve accuracy and to increase processing speed. One outgrowth is image processing, which became a field of its own once it was understood that computers deal not only with text but also with image data. Image processing techniques are usually used to transform one image into another image, while the task of extracting information is left to humans, aided by the development of algorithms. The field includes image enhancement, accentuation of certain features of an image, image compression, and image correction. A visual system, by contrast, uses an image as input but produces other types of output, such as a representation of the object contours in the image, or the movement of a mechanical device integrated with the visual system.
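As noted above, image processing transforms one image into another. A minimal illustration of such an image-to-image point operation, not taken from the paper (the function name, the offset, and the sample values are invented for the example):

```python
def adjust_brightness(image, offset):
    """Point operation: add a constant to every pixel, clamped to 0..255.

    `image` is a 2D list of grayscale values; the result is a new image
    of the same size, illustrating the image-in / image-out definition.
    """
    return [[min(255, max(0, p + offset)) for p in row] for row in image]

img = [[10, 200], [128, 255]]
print(adjust_brightness(img, 60))  # [[70, 255], [188, 255]]
```

Brightening a too-dark image this way is one of the enhancement operations the text mentions; more elaborate enhancements follow the same image-in, image-out pattern.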
The emphasis of a visual system is therefore on automatic recovery and improvement of information with minimal human interaction. Image processing algorithms were very useful in the early development of visual systems; they are usually used to sharpen certain information in the image before it is processed further.

Computer graphics, through graphical programming, produces images from primitive geometric shapes: points, straight and curved lines, circles, and other basic geometry. Computer graphics plays an important role in visualization. A visual system works in reverse: it infers primitive geometric shapes and other characteristics as a simplification of the more complex original image. So computer graphics combines image-forming elements to shape or synthesize images, while a visual system analyzes an image and sometimes breaks it down into simple forms that can be assessed quantitatively.

Image Processing

An image is a two-dimensional picture on a plane. From a mathematical point of view, an image is a continuous function of light intensity on a two-dimensional plane. A light source illuminates an object, the object reflects back some of the light, and the light is captured by an optical instrument such as the human eye, a camera, or a scanner, so that a shadow of the object, called the image, is recorded (Rinaldi Munir, 2004).

The image processing step is used to correct a degraded image so that it can be interpreted more easily, by a human or by a computer, with the aim of improving image quality (Rinaldi Munir, 2004). Image processing techniques transform an image into another image: both the input and the output are images, but the output image has better quality than the input image. Image processing is a computational science that allows humans to retrieve information from an image, and it cannot be separated from the field of computer vision. In its development it has two main objectives: (1) improving the quality of the image, so that the image information can be interpreted by humans (human perception); and (2) extracting salient information in an image, so that its characteristics can be clearly distinguished in the numerical data (Achmad Basuki, 2005). Image processing has also been described as a process that filters the original image into another image as needed; for example, an image that is too dark can be processed to obtain a clear picture, as illustrated in a block diagram (RJ Sigit, 2005).

A digital image is an image taken with a particular sampling and quantization, formed of pixels whose number depends on the sampling and whose values depend on the degree of gray and the quantization.
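The sampling and quantization just mentioned can be sketched in code. This is an illustrative Python sketch, not from the paper; the continuous intensity function and the grid size are invented for the example:

```python
def digitize(f, M, N, levels=256):
    """Sample a continuous intensity f(x, y), with values in [0, 1],
    on an M x N grid, and quantize each sample to an integer gray
    level in 0..levels-1."""
    return [[min(levels - 1, int(f(x / M, y / N) * levels))
             for x in range(M)]
            for y in range(N)]

# Example: a horizontal ramp from black toward white.
ramp = digitize(lambda x, y: x, M=4, N=2)
print(ramp)  # [[0, 64, 128, 192], [0, 64, 128, 192]]
```

A finer grid (larger M and N) is finer sampling; more levels is finer quantization, which matches the definition of a digital image given above.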
A digital image is modeled in matrix form: the image is defined as a function f(x, y), where x and y state the row and column numbers and f states the gray value of the image. The matrix model of a digital image makes matrix operations possible.

An image is spatial: it contains color information and does not depend on time. It is a set of picture points called pixels (picture elements). Each point carries a coordinate position and an intensity that can be expressed as a number; for a color image, the intensity is expressed through the sum of the red, green, and blue (RGB) components.

Figure 2. Schematic RGB Color Cube Coordinates

The color information of a pixel consists of: brightness, the lightness of the color (black, gray, white) of the source; and hue (red, yellow, green, and so on), the dominant wavelength of the source. For example, an image with 8 bits per pixel has 256 colors, while an image with 24 bits per pixel is stored as:

- bits 0 to 7 for red,
- bits 8 to 15 for green,
- bits 16 to 23 for blue.

This gives 16,777,216 possible color combinations, where the value 0 is black and the value 16,777,215 is white.

The relationship between image processing and neighboring fields, in terms of the input and output involved, can be described in the following table:

Table 1. Relations between image processing and related fields

  Input        | Output: Image      | Output: Description
  Image        | Image processing   | Pattern recognition / Computer vision
  Description  | Computer graphics  | Other data processing

The table makes clear that image processing is the field whose input is an image and whose result is also an image, where the process improves the image quality or the presentation of the image information.
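The 24-bit pixel layout described above (bits 0 to 7 red, 8 to 15 green, 16 to 23 blue) can be sketched with bit operations. The function names are my own; the packing order follows the list above:

```python
def pack_rgb(r, g, b):
    """Pack three 8-bit channels into one 24-bit value:
    bits 0-7 red, bits 8-15 green, bits 16-23 blue."""
    return r | (g << 8) | (b << 16)

def unpack_rgb(value):
    """Recover the (r, g, b) channels from a packed 24-bit value."""
    return value & 0xFF, (value >> 8) & 0xFF, (value >> 16) & 0xFF

assert pack_rgb(0, 0, 0) == 0                # black
assert pack_rgb(255, 255, 255) == 16_777_215 # white
assert unpack_rgb(pack_rgb(12, 34, 56)) == (12, 34, 56)
```

The round trip confirms the ranges: 0 is black and 16,777,215 (all 24 bits set) is white, out of 16,777,216 possible combinations.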
For the result to be numerical data or text stating the information contained in the image, the knowledge studied in pattern recognition and computer vision is required.

Image Digitization

For an image to be processed by a digital computer, it must be represented numerically with discrete values. The representation of a continuous image function as discrete values is called digitization, and the image produced is called a digital image. In general a digital image is rectangular, its size expressed as height x width. A digital image of height N, width M, and gray-level depth L can be expressed as a function:

  f(x, y), with 0 <= x <= M, 0 <= y <= N, and 0 <= f <= L

The smallest unit of digital data is the bit, a binary digit, 0 or 1. A set of 8 bits is a unit called a byte, with a value of 0-255. A pixel (picture element) is the smallest point element in an image. The numerical figure (1 byte) of a pixel is called its digital number (DN). A digital number can be displayed as a gray color ranging between white and black (gray scale), depending on the level of energy detected. Pixels arranged in the correct order form an image.

Figure 5. Relationship between the degree of gray and the digital number in making up an image

3. SCENARIO TRIAL

To carry out this research, tools and a system for taking pictures were designed, arranged in the following block diagram:

Figure 3. Block diagram of the hardware

The image processing application is designed to determine the position of an object from webcam captures, using Delphi components to capture and display the images, arranged in the following block diagram:

Figure 4. Design of the software (grayscale, binary, and position determination stages)

a. Taking pictures with a webcam, using an application program built from Delphi components:
   1) TtsCap32, a component to display the moving image;
   2) TtsCap32PopupMenu, a component to adjust how shooting is done;
   3) TtsCap32Dialogs, a component to set the format of the image to be captured.
b. Changing the image to a grayscale and then a binary image. Images captured as color images are processed by the application into a gray (grayscale) image: the pixel values of the color image are summed and divided by three, according to the number of layers in the color image (the r, g, and b layers), into a single gray layer.

System algorithm. The algorithm of the object-positioning application is as follows:

1) Create the image-retrieval application program.
2) Capture a picture using the webcam.
3) Display the captured picture as a still image.
4) Change the color image to grayscale.
5) Change the grayscale image into a binary image.
6) Determine the coordinates of four points.
7) Determine the X, Y, Z position.

4. RESULTS AND DISCUSSION

The main function of this system is to determine the position of objects with an image processing application. The procedure is to take a picture (color image) with a webcam, change it to a gray image and then a binary image, determine the coordinates of four points, and determine the position of the object. The initial process for determining the position of an object in three-dimensional space is to capture the object as a color image and change it into a gray (grayscale) image, with the following procedure:

1. Position the object in the desired position.
2. Activate the shooting program.
3. Connect webcams one and two to the computer.
4. Calibrate cameras one and two.
5. Show the screen.
6. Capture the images.
7. Change the color image into a grayscale image by adding the values of the three layers (the r, g, and b values) and dividing by three, producing a grayscale image with the formula:

  gray = (r + g + b) / 3

This process classifies the pixels into regions that represent the object and distinguish it from the background. The grayscale image is then binarized so that each pixel is only 0 or 1. In a binary image, the boundary between object and background is evident: object pixels are white and background pixels are black. To determine the binary value, the 256 gray levels of the grayscale image are divided in two at the middle value, 128, so the conversion to a binary image can be written as:

  if the gray value < 128 then the value is 0
  if the gray value >= 128 then the value is 1

After converting the color image into a grayscale and then a binary image, the next process is to determine the coordinates of four points of the form x1-y1, x2-y2, x3-y3, x4-y4, with the following procedure:

1. The captured binary image is processed to determine the coordinates of point x1y1 by tracking pixels of value 1, starting from coordinate (0,0) at the top-left of the binary image and repeating until the first pixel of value 1 is found; this becomes x1y1. The tracking condition is that if a pixel's value is 0, the search continues until a pixel of value 1 is found.
2. After x1y1 is found, tracking continues to find the pixel of value 1 nearest the column boundary of the image matrix; this becomes x2y2. The tracking condition is the same.
3. Tracking continues through the coordinate matrix of the binary image to find the pixel of value 1 farthest from the row of pixels; this becomes x3y3. The tracking condition is the same.
4. After x3y3 is found, tracking continues to find the pixel of value 1 located last in the image matrix; this becomes x4y4. The tracking condition is the same.

Once the coordinates of the four points are obtained, the X, Y, Z position is determined as follows:

a. From the binary image captured by camera one, the x1 coordinate of the first object pixel found during tracking is used as the X value, because it is the first value obtained that is parallel to the X axis in three-dimensional space.
b. The binary images produced by camera one and camera two, whose y1 values are the same, are used for the Y value: one of them is chosen, and the y1 pixel value is subtracted from the y4 pixel value to obtain Y, because this coordinate value is parallel to the Y axis in three-dimensional space.
c. The Z value is taken from the image captured by camera two: the x1 coordinate of point x1y1 of the four points, which is parallel to the Z axis.
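The procedure above, averaging the r, g, and b layers, thresholding at 128, then scanning the binary image for extreme object pixels, can be sketched in Python. The paper's implementation is in Delphi; this is an illustrative re-implementation with helper names of my own choosing, and the four returned points (topmost, leftmost, rightmost, bottommost object pixels) only approximate the paper's tracking order:

```python
def to_grayscale(rgb_image):
    """Average the r, g, b layers into one gray layer: gray = (r+g+b)/3."""
    return [[(r + g + b) // 3 for (r, g, b) in row] for row in rgb_image]

def to_binary(gray_image, threshold=128):
    """Pixels with gray value >= 128 become 1 (object), others 0."""
    return [[1 if p >= threshold else 0 for p in row] for row in gray_image]

def extreme_points(binary_image):
    """Scan for object pixels (value 1) and return the topmost, leftmost,
    rightmost, and bottommost ones, roughly the four tracked points."""
    points = [(x, y)
              for y, row in enumerate(binary_image)
              for x, p in enumerate(row) if p == 1]
    top = min(points, key=lambda p: p[1])
    left = min(points, key=lambda p: p[0])
    right = max(points, key=lambda p: p[0])
    bottom = max(points, key=lambda p: p[1])
    return top, left, right, bottom

# A 4x4 black frame with a 2x2 bright object in the middle:
rgb = [[(0, 0, 0)] * 4 for _ in range(4)]
for y in (1, 2):
    for x in (1, 2):
        rgb[y][x] = (210, 200, 190)
binary = to_binary(to_grayscale(rgb))
print(extreme_points(binary))  # ((1, 1), (1, 1), (2, 1), (1, 2))
```

From four such points, the X, Y, and Z values described in steps a to c would be read off the two cameras' images, e.g. Y as the difference between the y4 and y1 pixel values.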

Figure 6. Object position determination process

5. CONCLUSION

1. An algorithm was composed to process the digital image of the object: the captured color image, composed of a 3-layer matrix, is processed into a grayscale image by summing the RGB values and dividing by three, producing a single gray layer with values ranging from 0 to 255; the grayscale image is then converted into a binary image in which the object value is 1 and the background is 0.
2. An image processing application program to determine the position of the object was successfully designed.

6. REFERENCES

[1] Achmad Basuki, et al., 2005. Digital Image Processing using Visual Basic, First Edition, London: Graha Science.
[2] Adi Nugroho, 2005. Rational Rose for Object-Oriented Modeling, First Edition, New York: Information.
[3] Balza Ahmad and Kartika Firdausy, 2005. Digital Image Processing Technique Using Delphi, Yogyakarta: Ardi Publishing.
[4] Bambang Robi`in, 2004. Multimedia Graphics Programming with Delphi, Yogyakarta: Andi Offset.
[5] Eru Puspita, Detection and Tracking System for Realtime Face (Online), http://www.ies.eepisits.edu/index.php