Coded Aperture for Projector and Camera for Robust 3D measurement


Yuuki Horita  Yuuki Matugano  Hiroki Morinaga  Hiroshi Kawasaki  Satoshi Ono  Makoto Kimura  Yasuo Takane

Department of Information Science and Biomedical Engineering, Faculty of Engineering, Kagoshima University
Samsung Yokohama Research Institute Co., Ltd.

Abstract

General active 3D measurement systems using structured light are based on triangulation, which requires correspondence between the projected pattern and the pattern observed by the camera. Since both the projected pattern and the camera image must be in focus on the target, this condition severely limits the depth range of 3D measurement. In this paper, we propose a technique using a coded aperture (CA) on both the projector and the camera to relax this limitation. In our method, a Depth from Defocus (DfD) technique is used to resolve the defocus of the projected pattern. By allowing the projected pattern to be blurry, the measurement range is extended compared to common structured light methods. Furthermore, overlapping blurred patterns can also be resolved with our technique.

1 Introduction

Active 3D measurement systems based on structured light first retrieve correspondences between projected patterns and observed patterns, and then recover 3D information by triangulation. To retrieve the correspondences accurately, the patterns should be captured clearly by the camera. Thus, both the camera and the pattern projector should be in focus on the target, which imposes a severe constraint on how they are set up. Since the depth of field (DOF) of a projector is usually narrower than that of a camera because of the limited power of the light source, the projector's DOF usually limits the range of 3D measurement. One essential solution to this problem is to use a special light source, such as a laser, which emits a straight beam without blur. However, producing a dense and complicated 2D pattern with a laser is not easy, and using a strong laser also raises safety issues.

In this paper, we propose a new structured-light-based 3D reconstruction technique in which strong blur effects are allowed. To exploit the blur efficiently with structured light, we use coded apertures on both the light source and the camera together with a DfD technique. Since the technique actively uses the blur effect, the projector's narrow DOF can become an advantage, and the measurement accuracy can be improved under blurry conditions. The main contributions of the paper are as follows.

1. Measurement accuracy on blurry patterns can be improved by using a CA in both the projector and the camera.
2. Based on a deconvolution technique, overlapping patterns can be used for reconstruction.
3. Projector and camera configurations with and without a CA are evaluated.

2 Related work

Currently, active 3D measurement devices are widely available [5, 6]. They are usually based on triangulation using structured light because of its practical advantages in accuracy and cost effectiveness. To conduct triangulation, accurate and dense correspondences are required [1], and all of these methods assume that the optics of both the pattern projector and the camera are well focused on the target surface. This severely limits the actual measurement range. One solution to the problem is to use a focus-free pattern projection (i.e., a laser beam [6]). Our proposed method takes another approach, using a defocused pattern with a common light source. DfD techniques are well known for camera systems [8], but not for projector systems. Moreno-Noguer et al. [7] proposed DfD using the pattern projector's defocus instead of the camera's defocus. They used a grid of dots, so that the defocus of each observed dot reflects its own depth information. Since the goal of that paper [7] was not 3D measurement but image refocusing, the projected dots were sparse. In contrast, since our purpose is to measure depth, a dense pattern is required. In that case, the patterns inevitably overlap each other when the blur becomes large, and thus a solution is required.

Recently, CA theory and techniques have been studied in the field of computational photography [9, 4]. By using a non-circular aperture, many special post-processes can be realized, e.g., motion deblurring [10], all-in-focus imaging [9], and DfD [4]. In contrast, there is little research on CAs in projectors. Grosse et al. proposed a data projection system including a programmable CA [3]. They made use of CA theory to expand the projector's physical DOF, but not for 3D measurement.

Figure 1. Optical system. Figure 2. Designing with a half mirror. Figure 3. Actual optical system.

3 System configuration

The proposed system consists of a lens, an LED, a CCD, and a CA, as shown in Fig.1. Since the proposed technique is based on DfD, it would be ideal to design the system with a half mirror, as shown in Fig.2, to capture images without distortion. However, since constructing an actual system with a half mirror is not easy, and the light intensity is severely decreased (i.e., to half or less), we take another option: we install the projector's lens and the camera as close together as possible. Such a configuration is allowed because no baseline is required in our system. For the light source, we use an array of LEDs in the prototype. Because each LED is arranged independently, the resolution is low; however, this is only for evaluation purposes and is not an actual limitation of the system. If high resolution is needed, we can instead put a CA in a video projector, a micro-lens array, or a diffractive optical element (DOE). For the design of the CA, we used a pattern generated by a genetic algorithm for DfD [11]. Since the shape of the pattern differs according to system noise, we tested several patterns to find the best parameters for the system. With our system, we used σ = 0.001, and the actual pattern is shown in Fig.4 (a).

4 Depth from defocus of projected pattern

With the proposed technique, shape is reconstructed by DfD using the defocus blur of the reflected light pattern projected from a projector with a CA. The technique mainly consists of two steps: the first is a calibration step, which estimates the blur parameters for each depth, and the second is a shape reconstruction step, which estimates the depth of the reflected pattern on the object. Note that the calibration process is required only once for the system. In addition, it is assumed that the intrinsic parameters of the camera are calibrated by a known algorithm, e.g., OpenCV [2].

4.1 Calibration of defocus of light source

In a usual structured light based 3D measurement system, the reflected pattern is assumed to be sharp, with little blur. In contrast, since strong blur is expected with our technique, the parameters that represent the defocus effects, i.e., the parameters describing the point spread function (PSF), should be calibrated. Although the shape of the PSF depends on both depth (scaling) and noise, the main factor is the scaling. Using the extrinsic camera and projector parameters, the scaling can be calculated and the PSF for a specific depth can be created from the shape of the CA. In our case, however, the PSF is a convolution of the two CAs of the projector and the camera, and thus it is difficult to build an accurate PSF for a specific depth from the extrinsic parameters alone. Based on the above, instead of creating the PSF from the extrinsic parameters, we capture actual blur patterns at several depths and estimate the scaling parameters used to create the PSFs. Although this makes the calibration process more complicated, a more accurate blur model is obtained, and extrinsic calibration of the camera and projector is not required. For the actual calibration, the blur pattern produced by the CA is first projected on a flat board, and several images are captured while changing the depth of the board. Since we use LEDs, which can be approximated as point light sources, the projected pattern can be considered as a PSF itself (Fig.4).
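The observation that the system PSF is the convolution of the projector-side and camera-side blurs can be illustrated with a minimal sketch. The code below is not from the paper's implementation; it assumes each coded aperture is available as a 2D intensity mask and that each side's blur is a pure rescaling of its aperture shape, with all function names being illustrative.

```python
import numpy as np
from scipy.signal import convolve2d
from scipy.ndimage import zoom

def scale_mask(mask, scale):
    """Resize an aperture mask by a depth-dependent scaling factor
    (bilinear zoom), then renormalize so the kernel integrates to 1."""
    scaled = zoom(mask.astype(float), scale, order=1)
    s = scaled.sum()
    return scaled / s if s > 0 else scaled

def system_psf(proj_aperture, cam_aperture, s_proj, s_cam):
    """System PSF for one depth: the projector-side blur convolved
    with the camera-side blur, each scaled by its own parameter."""
    k_p = scale_mask(proj_aperture, s_proj)
    k_c = scale_mask(cam_aperture, s_cam)
    return convolve2d(k_p, k_c, mode='full')
```

Because convolution grows the kernel support, the combined PSF is larger than either aperture alone, which is consistent with the paper's choice to calibrate the scalings from real captures rather than from the extrinsic parameters.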
To estimate the scaling parameters of the PSF from the captured images, we apply deconvolution to the captured images while varying the scaling parameter of the PSF, and search for the scaling parameter whose deconvolved image is most similar to the original pattern. Since this is a 2D search with two scaling parameters, one for the camera and one for the projector, we conduct a full search to retrieve the solution. For the deconvolution algorithm, we use the Wiener filter deconvolution technique [9].
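A minimal sketch of this calibration search is given below, assuming FFT-based Wiener deconvolution and reusing the hypothetical system_psf helper from the previous sketch; the snr value and the sharpness criterion are stand-ins, not the paper's tuned parameters.

```python
import numpy as np

def wiener_deconv(image, psf, snr=100.0):
    """Wiener filter deconvolution in the Fourier domain.
    snr is the assumed signal-to-noise ratio (1/NSR)."""
    H = np.fft.fft2(psf, s=image.shape)            # PSF transfer function
    G = np.fft.fft2(image)
    W = np.conj(H) / (np.abs(H) ** 2 + 1.0 / snr)  # Wiener filter
    return np.real(np.fft.ifft2(W * G))

def calibrate_scales(captured, proj_ap, cam_ap, scales, sharpness):
    """Full 2D search over projector/camera scaling parameters:
    keep the pair whose deconvolved image scores best under the
    given sharpness/similarity criterion."""
    best_pair, best_score = None, -np.inf
    for s_p in scales:
        for s_c in scales:
            psf = system_psf(proj_ap, cam_ap, s_p, s_c)
            restored = wiener_deconv(captured, psf)
            score = sharpness(restored)
            if score > best_score:
                best_pair, best_score = (s_p, s_c), score
    return best_pair
```

Here sharpness could be, for example, the negative SSD against the known point pattern, or the peak-value criterion introduced in Section 4.2.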

Figure 4. Projected and captured CA at depths 250 mm, 280 mm, 300 mm, and 330 mm: (a) coded aperture, (b)-(e) captured patterns. Note that all the captured patterns are blurry because a CA is set in both the projector and the camera.

4.2 Depth estimation by deconvolution

In the proposed method, a pattern is projected onto the target object so that the depth can be estimated from the observed defocus. Fig.5 shows the algorithm. Since we measure the defocus parameters (in our method, the scaling parameters of the camera and projector) at sampled known depths, we conduct deconvolution using the PSFs of all depths. We then consider that the filter whose deconvolution result is closest to the original projection pattern represents the correct depth. If the defocus is strong or the dot density is high, the defocused dots overlap. However, with the correct filter, the overlap of the pattern is canceled by the deconvolution, and thus we can obtain the depth of each individual dot without interference from the overlap. This is one of the strengths of our method.

To estimate the best deconvolution filter, a simple solution is to calculate the similarity between the projection pattern and the deconvolution result; for this, the sum of squared differences (SSD) can be used. However, the actual deconvolution result usually contains errors, and similarity computed with SSD sometimes becomes unstable. Therefore, we also try another method to find the best filter. Since we use LEDs as the light source, which concentrate extremely strong intensity in a small area, the deconvolved image should have a single strong peak when the best filter is applied. Based on this, we use the following equation to decide the depth:

d = argmax_i p(D_i(I)),    (1)

where p is a function that calculates the peak value in an image and D_i denotes deconvolution with the filter of depth i. We take this method when the light source can be considered a point light source. Since we have calibrated filters only at coarsely sampled depths, we have to estimate sub-sampling parameters to acquire fine depth values; in this paper, we linearly interpolate the PSF.

Figure 5. Reconstruction algorithm: 1. project the pattern on a target object and capture it; 2. deconvolve the image using the filter of a specific depth; 3. calculate the fitness between the deconvolved image and the original pattern (repeat steps 2-3 until the last depth); 4. find the best-fit depth.

5 Experiments

We constructed an actual system and conducted experiments to show the effectiveness of the method. The actual system is shown in Fig.6. We used an achromatic lens with a 150 mm focal length and a 50 mm diameter. The CCD camera has a resolution of 1280 × 960, and a red LED array (660 nm) arranged at 18 × 12 resolution was used. The size of the CA was 35 mm × 35 mm, and the distance between the lens and the light source was 300 mm. Since the system uses only a single lens, large distortion appeared in the peripheral region of the reflected pattern. Therefore, we used only the center of the pattern, where strong distortion was not observed. Such distortion can be eliminated by an appropriate optical design, which is important future work. We calibrated the PSF at 10 mm intervals and estimated the depth on the order of 1 mm.

5.1 Calibration of defocus

We calibrated the PSF and estimated the scaling parameters while changing the depth. We projected the blur pattern generated by the CA onto a flat board and captured images at 10 mm intervals over the 250 to 350 mm depth range. A part of the results is shown in Fig.4. Fig.7 (a) shows an example of the 2D search, and the scaling parameters estimated from these images are shown in Fig.7 (b). We can see that smooth curves are acquired by our calibration technique.

Figure 6. Equipment and measurement scene (coded aperture in camera).
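The depth decision of Eq. (1), together with the linear PSF interpolation used to refine the coarsely calibrated depths, could look roughly like the following sketch. It reuses the hypothetical wiener_deconv from the previous sketch and assumes all calibrated PSFs are padded to a common size so they can be blended linearly.

```python
import numpy as np

def estimate_depth(image, depth_psfs, refine=10):
    """Depth selection following Eq. (1): deconvolve with each
    calibrated PSF and pick the depth whose result has the strongest
    peak. depth_psfs is a depth-sorted list of (depth_mm, psf) pairs
    at the coarse calibration depths, all padded to one common size."""
    # Coarse pass over the calibrated depths.
    peaks = [wiener_deconv(image, psf).max() for _, psf in depth_psfs]
    i = int(np.argmax(peaks))
    best_d, best_peak = depth_psfs[i][0], peaks[i]

    # Fine pass: linearly interpolate PSFs toward each neighbour
    # of the coarse winner and keep the strongest peak response.
    for j in (max(i - 1, 0), min(i + 1, len(depth_psfs) - 1)):
        if j == i:
            continue
        for t in np.linspace(0.0, 1.0, refine, endpoint=False)[1:]:
            psf = (1 - t) * depth_psfs[i][1] + t * depth_psfs[j][1]
            d = (1 - t) * depth_psfs[i][0] + t * depth_psfs[j][0]
            peak = wiener_deconv(image, psf).max()
            if peak > best_peak:
                best_d, best_peak = d, peak
    return best_d
```

This matches the setup of the experiments below: PSFs calibrated at 10 mm intervals, with interpolation providing estimates on the order of 1 mm.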

Figure 7. Calibration results of PSF: (a) 2D search space to find the scaling parameters; (b) estimated scaling parameters.

Table 1. Restored images using the PSFs of each distance (input images and deconvolution results with filters of depth 250 mm, 290 mm, and 350 mm), for setting 1) CA only in the projector and setting 3) CA in both the projector and the camera.

Figure 8. Flat board capturing configuration and captured images (a)-(d).

5.2 Plane estimation for evaluation

Fig.8 shows the relationship between the device and the board. The light emitted from the LED array is reflected on the board, the reflected patterns on the target are captured by the camera, and the shape is reconstructed. We conducted experiments with three settings: 1) CA only in the projector, 2) CA only in the camera, and 3) CA in both the projector and the camera. Table 1 shows the captured patterns (left column) and the images deconvolved with the calibrated filter of each depth for settings 1) and 3). We can confirm that the images deconvolved with the correct depth filter restore a sharp pattern. Using these restored images, we can estimate the depth. The results are shown in Fig.9. From the figure, we can confirm that the shapes are correctly restored with our technique. Note that even largely blurred images, which create large overlapping areas between the patterns (e.g., at depth 250 mm or 350 mm), are correctly reconstructed; with such blurry patterns, shapes cannot be restored by a conventional structured light method.

Fig.10 shows the average and standard deviation of the estimated depth. From the figures, we can see that setting 1) achieves the best average error. However, the standard deviation of 1) increases when the depth is near the focus; this is because, without blur, the ambiguity of DfD increases, and setting 3) is better than 1) near the in-focus zone. We consider that the camera blur helps decrease this ambiguity in a complementary manner.
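As a hypothetical illustration, the per-depth statistics plotted in Fig.10 amount to the mean error and standard deviation of the per-dot depth estimates against the known board depth; the helper below is not from the paper.

```python
import numpy as np

def plane_error_stats(estimated_depths, true_depth):
    """Mean absolute error and standard deviation of the per-dot
    depth estimates for a flat board at a known depth (cf. Fig.10)."""
    d = np.asarray(estimated_depths, dtype=float)
    return np.mean(np.abs(d - true_depth)), np.std(d)
```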

Figure 9. Restoration results for the flat board: (a) CA only in the projector; (b) CA only in the camera; (c) CA in both the projector and the camera.

Figure 10. Statistical results: (a) average error; (b) standard deviation. The in-focus depth of the projector is 290 mm, and that of the camera is 180 mm.

5.3 Arbitrary shape estimation

Next, we estimated the depth of a more general object: the white statue shown in Fig.11 (a), whose center is placed 270 mm from the lens. Fig.11 (b) shows the reflected pattern, and (c) and (d) show the reconstruction results. We can see that although the blurred patterns overlap each other, the shape is robustly restored with our technique. At the same time, we observe unstable reconstruction in some parts of the object. We consider that this is because the camera was placed slightly away from the projector's lens, so the patterns are distorted, especially on slanted surfaces. Such effects are expected to be resolved by using a half mirror, which is our future work.

Figure 11. Reconstruction result: (a) target object; (b) captured image; (c) reconstructed result with pseudo-color; (d) shape with texture. Although small errors exist, the shape is robustly reconstructed under overlapping patterns.

6 Conclusion

In this paper, we proposed a structured-light-based 3D measurement system using CAs. With our system, blur effects are efficiently resolved by DfD on both the camera and the projector. In the experiments, we verified that shapes can be recovered with high accuracy by our method and showed that curved surfaces are successfully reconstructed. In the future, we will consider using a half mirror to avoid distortions.

7 Acknowledgment

This work was supported in part by SCOPE #101710002 and the NeXT program LR030 in Japan.

References

[1] J. Batlle, E. M. Mouaddib, and J. Salvi. Recent progress in coded structured light as a technique to solve the correspondence problem: a survey. Pattern Recognition, 31(7):963-982, 1998.
[2] G. Bradski and A. Kaehler. Learning OpenCV: Computer Vision with the OpenCV Library. O'Reilly Media, Oct. 2008.
[3] M. Grosse and O. Bimber. Coded aperture projection. In SIGGRAPH, 2008.
[4] A. Levin, R. Fergus, F. Durand, and W. T. Freeman. Image and depth from a conventional camera with a coded aperture. In SIGGRAPH, 2007.
[5] Mesa Imaging AG. SwissRanger SR-4000, 2011. http://www.swissranger.ch/index.php.
[6] Microsoft. Xbox 360 Kinect, 2010. http://www.xbox.com/en-us/kinect.
[7] F. Moreno-Noguer, P. N. Belhumeur, and S. K. Nayar. Active refocusing of images and videos. ACM Trans. on Graphics (also Proc. of ACM SIGGRAPH), Aug. 2007.
[8] S. Murali and G. Natarajan. Depth recovery from blurred edges. In CVPR, 1988.
[9] H. Nagahara, S. Kuthirummal, C. Zhou, and S. K. Nayar. Flexible depth of field photography. In ECCV, 2008.
[10] R. Raskar, A. Agrawal, and J. Tumblin. Coded exposure photography: Motion deblurring using fluttered shutter. In SIGGRAPH, 2006.
[11] C. Zhou and S. K. Nayar. What are good apertures for defocus deblurring? In IEEE International Conference on Computational Photography, Apr. 2009.