QUALITY CHECKING AND INSPECTION BASED ON MACHINE VISION TECHNIQUE TO DETERMINE TOLERANCE VALUE USING SINGLE CERAMIC CUP


Nursabillilah Mohd Alie 1, Mohd Safirin Karis 1, Gao-Jie Wong 1, Mohd Bazli Bahar 1, Marizan Sulaiman 1, Masrullizam Mat Ibrahim 2 and Amar Faiz Zainal Abidin 3

1 Faculty of Electrical Engineering, Universiti Teknikal Malaysia Melaka, Malaysia
2 Faculty of Electronics and Computer Engineering, Universiti Teknikal Malaysia Melaka, Malaysia
3 Faculty of Electrical Engineering, Universiti Teknologi MARA, Pasir Gudang, Johor, Malaysia
E-Mail: nursabillilah@utem.edu.my

ABSTRACT
This paper discusses the development of an algorithm for inspection and quality checking using machine vision. The algorithm is designed to detect signs of defects when a sample of the product is presented for inspection, and to track a specific product colour while the inspection is carried out. The Python programming language and an open-source computer vision library were used to implement the inspection algorithm. Illumination and the surrounding environment were considered during the design, as they affect the quality of the image acquired by the image sensor. An experimental setup using a CMOS image sensor was built to evaluate the effectiveness of the designed algorithm. The experimental results are presented in graphical form for further analysis, and the analysis and discussion are based on the results obtained through the experiments. The designed algorithm is able to perform the inspection by detecting the sample object and differentiating between good and defective units.

Keywords: machine vision, quality checking, inspection, mark detection.

INTRODUCTION
A number of inspection methods are used in modern industry, depending on the inspection requirements. Different manufacturing processes typically need a specially designed machine vision system that achieves high inspection performance at the lowest possible cost. Internal checking is common in the food industry, where it is necessary to verify that the food is packed in the correct place or position and that the amount of food or contents fulfils the production requirement. External inspection, on the other hand, typically examines the packaging and printing of a product for damage or printing errors. The different criteria of an inspection process are discussed in the following.

High-technology inspection systems are implemented in modern industry and manufacturing processes to replace manual inspection, which may introduce technical issues or errors due to human physical constraints [1]. Some inspections need a visual system that observes in a way similar to the human eye [2], while other situations call for a different type of inspection, such as X-ray scanning, which can detect and observe the internal structure of a product rather than its outer appearance [3]. This paper discusses the design of an inspection algorithm based on computer vision techniques, implemented with Python programming and an open-source vision library.

Background of machine vision
A computer vision system is concerned with the construction of explicit and meaningful descriptions of physical objects from images [4]. Computer vision and machine vision are often thought to be one and the same, but they differ in terms of technology.
Computer vision refers broadly to the capture and automation of image analysis, whereas machine vision refers to the application of computer vision to factory automation. In earlier times, manufacturing processes and plantations relied on the human eye and manual labour to inspect every product, until computers and light-sensing devices were introduced. Computers and sensory devices were then used for image analysis and processing to imitate human inspection and to replace manual labour with a computer vision system. This avoids human error and improves productivity, since such a system can operate 24 hours a day as long as electricity is supplied.

METHODOLOGY
An inspection system based on the machine vision technique requires a vision camera, normally a sensor that captures images of the product and represents them as digital images. The design also involves a number of important factors that can affect its effectiveness. Throughout the study, each significant criterion of the inspection process was considered and addressed in a step-by-step sequential flow. The following sections illustrate the key steps in the development of the algorithm.

Image sensor
A CMOS (Complementary Metal Oxide Semiconductor) sensor was used in the experimental setup because of its performance characteristics compared with alternatives such as the CCD (Charge-Coupled Device). Figure-1 illustrates the architectural design of a CMOS sensor [5-6].

Figure-1. Architectural design of a CMOS sensor [5].
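As a concrete illustration of this acquisition step, the short sketch below shows how a frame from such a camera could be captured as a digital image with OpenCV. It is only a sketch; the device index, the requested frame rate and the output file name are assumptions, not details taken from the paper.

    import cv2  # open-source computer vision library (OpenCV)

    # Device index 0 and the output file name are assumptions for illustration.
    cap = cv2.VideoCapture(0)          # open the CMOS camera
    cap.set(cv2.CAP_PROP_FPS, 60)      # request a 60 fps live stream, as used in the tests below
    ret, frame = cap.read()            # grab one frame as a digital image (NumPy array)
    if ret:
        cv2.imwrite("sample_frame.png", frame)  # store the frame for offline processing
    cap.release()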

With this structural design, the CMOS sensor offers greater speed than a CCD, along with other factors that indirectly increase the performance of the inspection system design. The sensor has its own pros and cons in different respects when compared with a CCD sensor [5].

Illumination and surrounding
This step is critical during the inspection process because illumination directly affects the digital image produced by the CMOS sensor. A poor illumination setup tends to produce poor images and therefore reduces the effectiveness of the inspection process. An enclosure is required to block unwanted light sources and excessive light intensity directed onto the product or sample, since excessive light intensity can produce very bright spots and cause glare in the resulting image. A specific background colour was therefore used while conducting the experiment. Figure-2 shows the illumination or lighting technique used for the product inspection.

Figure-2. Bright field lighting technique.

This technique uses a point light source rather than diffuse lighting, so that a high intensity of light is focused on the object. Bright field lighting was chosen because it is the technique most commonly used for product inspection. Various lighting conditions have also been studied for traffic sign recognition applications [6-9].

Algorithm design
To inspect an object, a suitable algorithm is required to determine whether the inspected sample is a good unit or a defective unit. Once the digital image is captured by the sensor, it undergoes image processing, which includes object recognition and detection, implemented in the Python programming language. An open-source computer vision library (OpenCV) is used for the image analysis and processing needed to develop the inspection algorithm. The algorithm is designed based on the process flow shown in Figure-3.

Figure-3. Flow chart of the algorithm.
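The sketch below outlines one way such a decision rule could look in Python with OpenCV, following the general flow described above (read the captured image, threshold it, and compare against a stored reference). It is an illustrative sketch only, not the authors' script; the threshold value, tolerance and function names are assumptions.

    import cv2
    import numpy as np

    def inspect(image_path, reference_white_count, tolerance=0.00212):
        # Classify a sample as a good or defective unit by comparing its
        # thresholded white-pixel count against a stored reference value.
        img = cv2.imread(image_path)
        gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
        # Fixed threshold of 127 chosen here only for illustration.
        _, binary = cv2.threshold(gray, 127, 255, cv2.THRESH_BINARY)
        white = int(np.count_nonzero(binary == 255))
        deviation = abs(reference_white_count - white) / float(reference_white_count)
        return "good unit" if deviation <= tolerance else "defective unit"

The default tolerance of 0.00212 mirrors the 0.212% tolerance rate derived later in the paper.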

Defective product
Before creating the algorithm, a proper understanding of what constitutes a defective product is essential. This ensures that the algorithm can interpret the detected product and decide whether it is a good product or a defective one. For most product inspections, products with visible scratches on the surface are considered defective. In most production lines, unwanted marks or colours on the surface are also considered colour defects, where a colour appears on the product surface that should not be there. Another criterion for a defective product is a shape defect, in which the shape of the product does not match the desired one. The algorithm is therefore designed based on the samples shown in Figures 4(a) and (b).

Figure-4(a). Good ceramic cup. (b). Defective cup.

In addition, the designed algorithm highlights the inspected region of the object with a green line. To compare products, an image of a good unit is stored in memory as a reference image together with its specific digital image characteristics in terms of colour, shape and size, so that each input image can be compared against the reference image.

Camera and object distance setting for running the experiment
An experiment was conducted to determine a suitable distance at which to position the camera vertically above the sample. One of the main concerns in this experiment was to understand how the distance between the camera and the object affects the image produced by the CMOS camera. Figure-5 shows the camera setting relative to the testbed.

Figure-5. Diagram of testbed distance.

The experiment was performed by varying the distance of the camera and measuring it at each step. The distance was measured with a standard measuring tape, in centimetres. A calculation is needed to obtain the distance from the camera to the object from the total height of the testbed and the measured value. From the diagram, the camera distance can be obtained using Equation (1):

D = T - X - C    (1)

where D = camera distance, T = total height of the testbed, X = measured distance and C = height of the sample. Based on the design of the set-up, the total height was 50 cm and the height of the sample was measured as 4.5 cm. The distance can then be calculated by substituting these values into the formula.

Framework of programming
A framework provides a layered structure for writing the program. The SimpleCV framework, a trademark of Sight Machine, was used in this design because it is open source and acts as a wrapper around the OpenCV library. Through this framework, the open-source computer vision library OpenCV can be accessed easily and the program syntax written in Python can be executed. The framework imports all the necessary OpenCV libraries, which makes the program more understandable. It therefore allows the algorithm to be designed easily and a prototype to be constructed using the built-in functions of the SimpleCV library.
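As a rough illustration of how the SimpleCV framework described above wraps the OpenCV functionality used in this work, the following minimal sketch captures a frame, thresholds it and outlines the detected regions in green. The camera index, the default binarize settings and the use of findBlobs are assumptions for illustration, not the authors' exact program.

    from SimpleCV import Camera, Color  # SimpleCV is a Python wrapper around OpenCV

    cam = Camera()                  # default device index; adjust for the actual testbed camera
    img = cam.getImage()            # capture one frame from the live stream
    binary = img.binarize()         # threshold the image (default settings assumed here)
    blobs = binary.findBlobs()      # locate connected regions such as the cup or a surface mark
    if blobs is not None:
        blobs.draw(color=Color.GREEN, width=2)  # highlight the inspected region with green lines
    binary.show()                   # display the result in a window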

Pixel frequency variation test
Since the camera inspects the sample through a live stream running at 60 frames per second (fps) while the object is placed vertically below the camera, the white pixel frequency was found to differ from one frame to another during the design process. This test therefore obtains the maximum and minimum pixel frequencies of the thresholded image per frame at 60 fps, and observes how the pixel frequency changes with respect to the captured frames. To obtain the maximum and minimum values, the script was executed and the pixel frequency of each frame was printed and collected through the Python output window. The tolerance rate can then be calculated using Equation (2):

Tolerance rate = (maximum pixel value - minimum pixel value) / maximum pixel value x 100%    (2)

Mark detection test
This experiment was conducted to test whether the designed algorithm is able to detect an unwanted sign or mark on the product surface. A mark was drawn onto the surface of the product using a non-permanent whiteboard marker pen. The mark size was measured with a standard measuring tape, after which the sample was placed in the testbed and the Python script was run. The size of the mark was varied and measured repeatedly throughout the experiment, gradually reducing it to determine the minimum mark size that can be detected by the designed algorithm.

RESULTS AND DISCUSSIONS
Throughout the experiments, the result and data for each attempt were recorded. The following sections present the results obtained from the experiments described above, classified according to the type of test conducted, with the object placed vertically under the camera.

Detection
Once the object is located vertically under the camera, the algorithm detects it and adds a drawing layer that indicates the region of the sample being inspected. Figure-6 shows the image detected by the algorithm [10-11] and the output displayed through the user interface.

Figure-6. Output displayed from the offline system.

Image captured
The image captured by the CMOS camera is processed with the threshold method to detect the regions of black pixels on the product surface that represent the mark. Figure-7 shows the processed image.

Figure-7. Image of ceramic cup with mark processing.

The histograms of the ceramic cup image with a mark, before and after thresholding, are plotted using matplotlib in Python; the results are shown in Figures 8(a) and (b).
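A minimal sketch of how such histograms could be produced with OpenCV and matplotlib is shown below; the file name, threshold value and bin count are assumptions for illustration and are not taken from the authors' script.

    import cv2
    from matplotlib import pyplot as plt

    # File name and threshold value are assumptions for illustration.
    gray = cv2.imread("cup_with_mark.png", cv2.IMREAD_GRAYSCALE)
    _, thresh = cv2.threshold(gray, 127, 255, cv2.THRESH_BINARY)

    # 50 bins are used to mirror the 0-50 brightness levels of Figure-8.
    plt.hist(gray.ravel(), bins=50, histtype="step", label="before thresholding")
    plt.hist(thresh.ravel(), bins=50, histtype="step", label="after thresholding")
    plt.xlabel("Brightness level")
    plt.ylabel("Pixel frequency")
    plt.legend()
    plt.show()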

Figure-8(a). Histogram of unprocessed image.
Figure-8(b). Histogram of image after thresholding.

In the histograms, the x-axis represents the level of brightness, similar to a grayscale. Instead of the full 0-255 range, only brightness levels 0-50 are used, to obtain a smoother graphical representation. Each line drawn in Figure-8(a) represents the pixel value and frequency for a frame captured by the CMOS camera.

Camera distance test result
This result was obtained from the camera position test, in which the distance between the camera and the object was varied while the pixel frequency was recorded to observe how it changes with the camera distance. The measured distances and image quality are given in Table-1, and Figure-9 shows the pixel frequency versus distance.

Table-1. Distance test result.

Distance, D (cm)    Image quality    White pixel frequency
11.5                Not clear        37386
10.5                Not clear        46197
9.5                 Moderate         61692
8.5                 Sharp            70431
7.5                 Sharp            95758
6.5                 Not clear        99096
5.5                 Not clear        107326

Figure-9. Graph of pixel frequency versus distance.

Pixel frequency variation result
This result was obtained by recording the pixel frequency counted by the Python script and printed through the user interface. Figure-10 shows the frequency obtained for the first 15 frames.

Figure-10. Graph of pixel frequency versus frame.

From the data collected, the maximum frequency value obtained was 95893 and the minimum was 95690. Based on these two values, a tolerance rate of 0.212% was obtained through Equation (2).
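The following short calculation reproduces this figure from Equation (2) using the reported maximum and minimum white-pixel frequencies.

    # Worked example of Equation (2) with the reported frame statistics.
    max_pixel = 95893
    min_pixel = 95690
    tolerance_rate = (max_pixel - min_pixel) / float(max_pixel) * 100
    print("Tolerance rate = %.3f%%" % tolerance_rate)   # approximately 0.212%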

Mark size test result
The size of the mark drawn on the product surface was varied to test the algorithm. Table-2 shows the result obtained as the mark size was reduced for each attempt, together with the output displayed by the algorithm. An output of True means the algorithm was able to detect the mark, whereas False means the algorithm failed to detect it.

Table-2. Mark size result.

Mark size (cm)    Output
0.50              True
0.45              True
0.40              True
0.35              True
0.30              True
0.25              True
0.20              True
0.15              False
0.10              False

According to the test, the size was reduced by approximately 0.05 cm per attempt, and the algorithm was executed for each mark size to test the detection.

CONCLUSIONS
Based on the design, the camera is best positioned 8.5 cm away from the surface of the product when a 2-megapixel CMOS camera is used. The tolerance rate was found to be 0.212%; any pixel deviation from the reference image lower than 0.212% is considered a good unit. In addition, the system is able to detect marks on the product surface of at least 0.20 cm for effective performance.

ACKNOWLEDGEMENT
The authors would like to thank the Robotics and Industrial Automation (RIA) research group under the Center of Robotics and Industrial Automation (CeRIA) and the Centre for Research and Innovation Management (CRIM) of Universiti Teknikal Malaysia Melaka for providing the tools and funding to complete this project.

REFERENCES

[1] Jiyan Z., Mingliang G., Yan C. and Taogeng Z. 2013. New method to generate observation target for contrast sensitivity and color vision inspection of the human eye. In: IEEE 11th International Conference on Electronic Measurement and Instruments. pp. 905-909.

[2] Brosnan T. and Sun D. W. 2004. Improving quality inspection of food products by computer vision - a review. Journal of Food Engineering. 61(1): 3-16.

[3] Casasent D. P., Talukder A., Cox W., Chang H. T. and Weber D. 1996. Detection and segmentation of multiple touching product inspection items. In: SPIE Optics in Agriculture, Forestry, and Biological Processing II. pp. 205-216.

[4] Stone J. V. 1993. Computer vision: What is the object? In: Prospects for Artificial Intelligence: Proceedings of AISB '93, 29 March-2 April 1993, Birmingham, U.K. A. Sloman and A. Ramsay (Eds.). IOS Press, Amsterdam, Netherlands. pp. 199-208.

[5] Litwiller D. 2001. CCD vs. CMOS. Photonics Spectra. 35(1): 154-158.

[6] Ali N. M., Karis M. S., Abidin A. F. Z., Bakri B. and Razif N. R. A. 2015. Traffic sign detection and recognition: Review and analysis. Jurnal Teknologi. 77(20): 107-113.

[7] Ali N. M., Mustafah Y. M. and Rashid N. K. A. M. 2013. Performance analysis of robust road sign identification. IOP Conference Series: Materials Science and Engineering. 53(1): 1-6.

[8] Ali N. M., Karis M. S. and Safei J. 2014. Hidden nodes of neural network: useful application in traffic sign recognition. In: IEEE Conference on Smart Instrumentation, Measurement and Applications. pp. 1-4.

[9] Ali N. M., Md Rashid N. K. A. and Mustafah Y. M. 2013. Performance comparison between RGB and HSV color segmentations for road sign detection. Applied Mechanics and Materials. 393: 550-555.

[10] Ali N. M., Jun S. W., Karis M. S., Ghazaly M. M. and Aras M. S. M. 2016. Object classification and recognition using bag-of-words (BOW) model. In: IEEE 12th International Colloquium on Signal Processing and Its Applications. pp. 216-220.

[11] Karis M. S., Razif N. R. A., Ali N. M., Rosli M. A., Aras M. S. M. and Ghazaly M. M. 2016. Local binary pattern (LBP) with application to variant object detection: A survey and method. In: IEEE 12th International Colloquium on Signal Processing and Its Applications. pp. 221-226.