CVPR Easter School. Michael S. Brown. School of Computing National University of Singapore


1 Computational Photography. CVPR Easter School, March 2011, ANU Kioloa Coastal Campus. Michael S. Brown, School of Computing, National University of Singapore

2 Goal of this tutorial. Introduce you to the wonderful world of Computational Photography, a hot topic in Computer Vision (and Computer Graphics). Plan: to give you an overview of several areas in which computational photography is making inroads: Computational Optics, Computational Illumination, Computational Processing (and other cool stuff). Attempt to do this in ~3.5 hours. This is traditionally a full-semester course; I've had to perform academic triage to size down the materials, so my notes are by no means exhaustive.

3 Outline. Part 1: Motivation and Preliminaries. Refresher on image processing and the basics of photography. Part 2: Computational Optics. Catadioptric cameras, flutter shutter camera, coded aperture, extended depth of field camera, hybrid cameras. BREAK. Part 3: Computational Illumination. Radiometric calibration, Flash/No-Flash low-light imaging, multi-flash (gradient camera), dual photography. Part 4: Computational Processing (and other cool stuff). User-assisted segmentation, Poisson image editing, seam-carving, image colorization. Added bonuses: we will touch on Markov Random Fields, the bilateral filter, the Poisson equations, and various other useful tools.

4 Part 1 Motivation and Preliminaries 4

5 Modern photography pipeline. Scene radiance -> Pre-Camera (lens filter, lens, shutter, aperture) -> In-Camera (CCD response (RAW), demosaicing, gamut mapping, preferred color selection, tone-mapping) -> Camera output: sRGB -> Post-Processing (touch-up, histogram equalization, spatial warping, etc.). Starting point: reality (in radiance). Ending point: better than reality (in RGB). Even if we stopped at the camera output, the original CCD response has potentially had many levels of processing.

6 Inconvenient or convenient truth? Modern photography is about obtaining perceptually optimal images Digital photography makes this more possible than ever before Images are made to be processed 6

7 Let's be pragmatic. PhD in Philosophy? No? Then forget the moral dilemma. We are scientists and engineers. What cool things can we do? Is it publishable? Nerd (you). Bill Freeman, Spacetime photography. Paper needed for graduation, promotion, travel to Hawaii. Daniel Reetz/Matti Kariloma (futurepicture.org)

8 Images are made to be processed. Fact: camera images are going to be manipulated. Opportunity: this gives us the freedom to do things with the knowledge that we will process them later. Levin et al.'s coded aperture; Nayar's catadioptric imaging; Raskar's multiflash camera; Tai's hybrid camera.

9 Well, first... So, what should we do? Do the obvious*: address the limitations of conventional cameras. Requires creativity and engineering. Second: think outside the box. Go beyond conventional cameras. Requires creativity, possibly beer... *note: obvious does not imply easiest

10 Examples of conventional photography limitations Image Blur/Camera Shake Limited Depth of Field No or bad coloring Limited Dynamic Range Limited Resolution Sensor noise Slide idea from Alyosha Efros 10

11 Beyond conventional photography Non-photorealistic camera Adjustable Illumination Camera View Illumination View Static View Beyond Static Space wiggle - Gasparini Inverse Light Transport 11

12 Where can we make changes? Shree Nayar s (U. Columbia) view on Computational Cameras... 12

13 What else can we do? Put the human in the loop when editing photographs. Similar philosophy to Anton van den Hengel s interactive image-based modeling. 13

14 Excited? Ready to go!!! We will need some preliminaries first. 14

15 Frequency Domain Processing. For some problems it is easy to think about manipulating an image's frequencies directly. It is possible to decompose an image into its frequencies. We can manipulate the frequencies (that is, filter them) and then reconstruct the resulting image. This is frequency-domain processing. It is a very powerful approach for global processing of an image. There is also a direct relationship between this and spatial-domain filtering.

16 Discrete Fourier Transform. The heart of frequency-domain processing is the Fourier Transform. 2D Discrete Forward Fourier Transform (to the frequency domain):

F(u,v) = \frac{1}{MN} \sum_{x=0}^{M-1} \sum_{y=0}^{N-1} f(x,y) \, e^{-j 2\pi (ux/M + vy/N)}

2D Discrete Inverse Fourier Transform (back to the pixel/spatial domain):

f(x,y) = \sum_{u=0}^{M-1} \sum_{v=0}^{N-1} F(u,v) \, e^{j 2\pi (ux/M + vy/N)}
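These two definitions translate directly into code. A minimal (deliberately unoptimized) Python sketch of the forward and inverse 2D DFT, written straight from the double-sum formulas above (in practice you would use an FFT such as MATLAB's fft2):

```python
import cmath

def dft2(f):
    """Forward 2D DFT: F(u,v) = (1/MN) sum_x sum_y f(x,y) e^{-j2pi(ux/M + vy/N)}."""
    M, N = len(f), len(f[0])
    return [[sum(f[x][y] * cmath.exp(-2j * cmath.pi * (u * x / M + v * y / N))
                 for x in range(M) for y in range(N)) / (M * N)
             for v in range(N)] for u in range(M)]

def idft2(F):
    """Inverse 2D DFT: f(x,y) = sum_u sum_v F(u,v) e^{+j2pi(ux/M + vy/N)}."""
    M, N = len(F), len(F[0])
    return [[sum(F[u][v] * cmath.exp(2j * cmath.pi * (u * x / M + v * y / N))
                 for u in range(M) for v in range(N))
             for y in range(N)] for x in range(M)]
```

With the 1/MN factor on the forward transform, F(0,0) is exactly the image mean (the DC component discussed on the next slides), and idft2(dft2(f)) recovers f.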

17 2D Discrete Fourier Transform Converts an image into a set of 2D sinusoidal patterns The DFT returns a set of complex coefficients that control both amplitude and phase of the basis Some examples of sinusoidal basis patterns 17

18 Simple way of thinking of the FT. The FT computes a set of complex coefficients F(u,v) = a + jb. Each (u,v) controls a corresponding unique frequency basis. For example: F(0,0) is the mean of the signal (also called the DC component); F(0,1) is the first vertical basis; F(1,0) is the first horizontal basis; F(1,1) is the first mixed basis.

19 Complex coefficient. The complex coefficient controls the phase and magnitude: F(u,v) e^{j\theta} = F(u,v)(\cos\theta + j\sin\theta) = (a + jb)(\cos\theta + j\sin\theta). Basis (0,1) with different F(0,1) coefficients: the frequency of the sinusoid doesn't change, only the phase and the amplitude.

F(0,1) = \frac{1}{MN} \sum_{x=0}^{M-1} \sum_{y=0}^{N-1} f(x,y) \, e^{-j 2\pi (0 \cdot x/M + 1 \cdot y/N)}

The coefficient corresponding to this basis is computed from the contribution of all image pixels f(x,y).

20 Inverting from the Frequency Domain. Inverting is just a matter of summing up the basis functions weighted by their F(u,v) contributions. Result after summing only the first 16 basis functions*:

f(x,y) \approx \sum_{u=0}^{4} \sum_{v=0}^{4} F(u,v) \, e^{j 2\pi (ux/M + vy/N)}

21 Filtering. We can filter the DFT coefficients. This means we throw away or suppress some of the basis coefficients in the frequency domain. The filtered image is obtained by the inverse DFT. We often refer to the various filters by the type of information they allow to pass through: Lowpass filter (low-order basis coefficients are kept), Highpass filter (high-order basis coefficients are kept), Bandpass filter (selected bands of coefficients are kept; can also be considered band-reject), etc.
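A lowpass filter of this kind can be sketched as a mask over the DFT coefficients. The wrap-around distance handling below (measuring each frequency against the nearest corner of the unshifted coefficient grid) is an implementation detail of this sketch, not something from the slides:

```python
import math

def ideal_lowpass(F, D0):
    """Keep only DFT coefficients within radius D0 of the DC term, zeroing the
    rest. On the unshifted grid, frequencies u and M-u have the same magnitude,
    so the distance is measured to the nearest corner."""
    M, N = len(F), len(F[0])
    return [[F[u][v] if math.hypot(min(u, M - u), min(v, N - v)) <= D0 else 0.0
             for v in range(N)] for u in range(M)]
```

With D0 = 0 only the DC coefficient survives; increasing D0 admits higher-order bases.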

22 Filtering and Image Content Consider image noise: Original Noise Does noise contribute more to high or low frequencies? 22

23 Typical Filtering Approach From: Digital Image Processing, Gonzalez and Woods. 23

24 Example. G(u,v) = H(u,v) F(u,v). In MATLAB: I = imread('saturn.tif'); F = fft2(I); H = ... (your own filter); G = H .* F; g = ifft2(G); Note that G(u,v) = H(u,v) F(u,v) is not matrix multiplication; it is an element-wise multiply. * Examples here have shifted the F, H, and G matrices for visualization. The log-magnitudes of F and G are shown.

25 Equivalence in the Spatial Domain. Recall the convolution theorem: f(x,y) * h(x,y) \Leftrightarrow F(u,v) H(u,v). (In the spatial domain we call h a point spread function; in the frequency domain we often call H an optical transfer function.) Frequency-domain filtering: G(u,v) = F(u,v) H(u,v), then g(x,y) = \mathcal{F}^{-1}\{G(u,v)\}. Spatial filtering: g(x,y) = f(x,y) * h(x,y). The frequency-domain filter H should be inverse-transformed to obtain h(x,y): h(x,y) = \mathcal{F}^{-1}\{H(u,v)\}.

26 Ideal Lowpass Filter From: Digital Image Processing, Gonzalez and Woods 26

27 Example from DIP Book From: Digital Image Processing, Gonzalez and Woods. 27

28 Original; Do=5; Do=15; Do=30; Do=80; Do=230. Do is the filter radius cut-off; that is, all basis coefficients outside Do are thrown away. Note the ringing artifacts in Do=15 and Do=30.

29 Why ringing? This is best demonstrated by looking at the inverse of the ideal filter back in the spatial domain: h(x,y) = \mathcal{F}^{-1}\{H(u,v)\}. Imagine the effect of performing spatial convolution with this filter: the result would look like ringing...

30 Making smoother filters The sharp cut-off of the ideal filter results in a sinc function in the spatial domain which leads to ringing in spatial convolution. Instead, we prefer to use smoother filters that have better properties. Some common ones are: Butterworth and Gaussian 30

31 Butterworth Lowpass Filter (BLPF). This filter does not have a sharp discontinuity; instead it has a smooth transition. A Butterworth filter of order n and cutoff frequency locus at a distance D_0 has the form

H(u,v) = \frac{1}{1 + [D(u,v)/D_0]^{2n}}

where D(u,v) is the distance from the center of the frequency plane.
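The transfer function is one line of code. A sketch, evaluating H at a single distance D from the center of the frequency plane:

```python
def butterworth_lowpass(D, D0, n):
    """Butterworth lowpass transfer function H = 1 / (1 + (D/D0)^(2n)),
    where D = D(u,v) is the distance from the center of the frequency plane."""
    return 1.0 / (1.0 + (D / D0) ** (2 * n))
```

H is exactly 0.5 at the cutoff D = D0 regardless of n, and as n grows the transition sharpens toward the ideal filter (which is why ringing reappears at high orders).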

32 1. The BLPF transfer function does not have a sharp discontinuity that sets up a clear cutoff between passed and filtered frequencies. 2. No ringing artifact visible when n = 1. Very little artifact appears when n <= 2 (hardly visible). 3. Ringing does start to appear when n gets larger. From: Digital Image Processing, Gonzalez and Woods. 32

33 BLPF Profile. Butterworth lowpass filters in the spatial domain, of order 1, 2, 5, and 20. Note the ringing increasing with filter order. Also, from the previous slide we see the filter begins to approach the ideal filter as the order increases. From: Digital Image Processing, Gonzalez and Woods.

34 Butterworth Example Filters are as follows: n=2, radii = 5, 15, 30, 80, 230. Note no (or little) visible ringing artifacts with n=2. 34

35 Gaussian Lowpass Filter. The Gaussian lowpass filter in the frequency domain is expressed as*:

H(u,v) = e^{-(u^2 + v^2)/(2\sigma^2)}

where sigma is used to control the cut-off. The inverse DFT of this function is also a Gaussian:

h(x,y) \propto e^{-2\pi^2 \sigma^2 (x^2 + y^2)}

Note that the GLPF has a Gaussian form in both the frequency and spatial domains. The variance in the frequency domain is inversely proportional to the variance in the spatial domain. *The u,v in the equation are assumed to be centered at the origin of the FT.

36 Gaussian Lowpass Filter (GLPF). Here the Gaussian is expressed in a slightly different form, more similar to the BLPF, with D_0 playing the role of sigma:

H(u,v) = e^{-D^2(u,v)/(2 D_0^2)}
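In this D_0 form the transfer function is again one line. A sketch, evaluating H at a single distance D:

```python
import math

def gaussian_lowpass(D, D0):
    """Gaussian lowpass transfer function H = exp(-D^2 / (2 * D0^2)),
    where D = D(u,v) is the distance from the center of the frequency plane."""
    return math.exp(-(D * D) / (2.0 * D0 * D0))
```

Unlike the ideal filter there is no sharp cutoff: H decays smoothly, which is why no ringing appears in the filtered images on the next slide.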

37 GLPF applied Variance set to 5, 15, 30, 80, and 230. No ringing artifacts are present. 37

38 Image Restoration Image restoration attempts to restore images that have been degraded Identify the degradation process and attempt to reverse it Often distinguished from enhancement, because it is more objective in its goal Corrupted Image Restored Image 38

39 Degradation Model. An ideal image f(x,y) passes through a degradation function h(x,y), and noise \eta(x,y) is added, giving the degraded image g(x,y). Spatial-domain description:

g(x,y) = h(x,y) * f(x,y) + \eta(x,y)

Frequency-domain description:

G(u,v) = H(u,v) F(u,v) + N(u,v)

40 Image Noise The sources of noise in digital images arise during image acquisition (digitization) and transmission Imaging sensors can be affected by ambient conditions Interference can be added to an image during transmission 40

41 Degradation Degradation function h, and H We think of the degradation function as a convolution. 41

42 Degradation Models. Image degradation can occur for many reasons; some typical degradation models are:

Motion blur (due to camera panning or the subject moving quickly; [a b] is the direction of the motion): h(i,j) = 1/L along the motion direction [a b], within a streak of length L; 0 otherwise.

Atmospheric blur (long exposure): h(i,j) = K e^{-(i^2 + j^2)/(2\sigma^2)}

Uniform 2D blur: h(i,j) = 1/L^2 for -L/2 \le i,j \le L/2; 0 otherwise.

Out-of-focus blur: h(i,j) = 1/(\pi R^2) for i^2 + j^2 \le R^2; 0 otherwise.
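Two of these models are easy to sketch as discrete kernels. The grid sampling and the renormalisation of the pillbox (so its discrete samples sum to 1) are illustrative choices of this sketch, not from the slides:

```python
def motion_blur_psf(L):
    """Horizontal motion-blur PSF of length L: 1/L inside the streak, 0 outside."""
    return [1.0 / L] * L

def out_of_focus_psf(R):
    """Out-of-focus (pillbox) PSF sampled on a (2R+1)x(2R+1) grid:
    constant inside a disc of radius R, then normalised to sum to 1."""
    size = 2 * R + 1
    h = [[1.0 if (i - R) ** 2 + (j - R) ** 2 <= R * R else 0.0
          for j in range(size)] for i in range(size)]
    s = sum(map(sum, h))
    return [[v / s for v in row] for row in h]
```

Both kernels sum to 1, so convolving with them preserves the image's mean brightness while spreading each point's energy.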

43 Restoration Via Deconvolution. Goal: find an estimate \hat{f} of the image f. The observed image is g(x,y) = h(x,y) * f(x,y) + \eta(x,y). The observation (input) is the image g; our goal is to recover the original image f.

44 Inverse Filtering. Assume we know or can estimate the filter h (or H). Since G(u,v) = F(u,v) H(u,v), the inverse filter is

\hat{F}(u,v) = \frac{G(u,v)}{H(u,v)}

and in the spatial domain \hat{f}(x,y) = \mathcal{F}^{-1}\{G(u,v)/H(u,v)\}.

45 Inverse Filter in Practice. \hat{F}(u,v) = G(u,v)/H(u,v). Problems with inverse filtering: H often has zeros (typically at the high frequencies)! This makes the division ill-posed. We can often just ignore the zero values and focus on the low frequencies where H is well defined. Uncertainties in H have a significant impact on the result.
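The "ignore the zeros" strategy can be sketched with a simple magnitude threshold; the eps value is an illustrative choice, not from the slides:

```python
def inverse_filter(G, H, eps=1e-3):
    """Frequency-domain inverse filtering: F_hat = G/H where |H| is safely
    non-zero; frequencies where H ~ 0 are dropped, since dividing there is
    ill-posed and amplifies noise."""
    return [[g / h if abs(h) > eps else 0.0
             for g, h in zip(grow, hrow)] for grow, hrow in zip(G, H)]
```

Applied element-wise to the DFT grids G and H, followed by an inverse DFT, this recovers the low frequencies exactly while leaving the destroyed ones at zero.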

46 Ringing. Some filters just invert poorly. E.g., a box filter completely removes some frequencies; its inverse filter is therefore ill-posed. This can result in what often looks like ringing or interference.

47 What about noise? G(u,v) = F(u,v) H(u,v) + N(u,v), so the inverse filter gives

\hat{F}(u,v) = F(u,v) + \frac{N(u,v)}{H(u,v)}

Noise is often attributed to the device; that means the noise happens after the PSF or OTF is applied. Inverse filtering can make the noise more noticeable, since N(u,v)/H(u,v) blows up where H is small. (Shown: blurred image + noise, and the result of inverse filtering.)

48 Photography Preliminaries 48

49 Photography in a nutshell Focal Length Exposure and Aperture Depth of Field Noise 49

50 Focal length and field of view 50

51 Exposure Exposure is how much light hits the camera sensor Two ways to control this: Aperture: the hole in the optical path for the light Shutter speed: the time the hole is opened Aperture Controllable Shutter 51

52 Shutter speed and aperture. Shutter speed is expressed as a fraction of a second: 1/30, 1/60, 1/125, 1/250, 1/500 (in reality, 1/32, 1/64, 1/128, 1/256,...). Aperture is expressed as the ratio of focal length to aperture size (f-stop): f/2.0, f/2.8, f/4, f/5.6, f/8, f/11, f/16, f/22, f/32. f/X means the focal length is X times bigger than the aperture diameter. Each f-stop reduces the area of the aperture by half; so the larger the f-number, the smaller the aperture. We are going to see how these are related in the following slides.
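The area-halving relationship can be checked numerically: relative exposure is proportional to the aperture area, i.e. (1/N)^2 for f-number N, times the shutter time. The function name below is just for illustration:

```python
import math

def relative_exposure(f_number, shutter_s):
    """Relative light reaching the sensor: proportional to the aperture area,
    (1/N)^2 for f-number N, times the shutter time in seconds."""
    return (1.0 / f_number) ** 2 * shutter_s
```

One full stop up in f-number (a factor of sqrt(2), e.g. f/2 -> f/2.8) halves the light; doubling the shutter time restores the same exposure, which is exactly the reciprocity shown on the next slides.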

53 Shutter speed and motion Slow shutter speeds can result in motion blur if the scene isn t static or if the camera moves or shakes. 53

54 Aperture and depth of field. Focus plane in the scene; sensor/film. Points outside the focal plane diverge on the sensor (the circle of confusion). Closing the aperture reduces the circle of confusion, i.e., it expands the depth of field. It also reduces the amount of light. Aperture controls depth of field (DoF).

55 Main effect of aperture Bigger aperture = shallow depth of field. 55

56 Exposure The play between f-stop and shutter: Aperture (in f stop) Shutter speed (in fraction of a second) Reciprocity The same exposure is obtained with an exposure twice as long and an aperture area half as big Slide from Fredo Durand 56 From Photography, London et al.

57 Reciprocity cont Assume we know how much light we need We have the choice of an infinity of shutter speed/aperture pairs What will guide our choice of a shutter speed? Freeze motion vs. motion blur, camera shake What will guide our choice of an aperture? Depth of field Often we must compromise Open more to enable faster speed (but shallow DoF) Slide from Fredo Durand 57

58 Note trade-off in DoF for motion blur. From Photography, London et al. 58

59 Note trade-off in DoF for motion blur. From Photography, London et al. 59

60 Note trade-off in DoF for motion blur. From Photography, London et al. 60

61 CCD sensitivity (ISO) and noise. One solution to low exposure from a fast shutter speed is to increase the camera's CCD signal (i.e., gain the signal). This is analogous to film ISO sensitivity: ISO 100 (slow film), ISO 1600 (fast film, 16x more sensitive). The drawback? Amplifying the CCD signal amplifies the sensor noise! bobatkins.com

62 Photography Equation. Focal length controls the view. Finessing motion blur, noise, and DoF: there is a trade-off between shutter speed and aperture.

Camera settings | Motion blur artifacts | DoF | Noise
fast shutter speed, wide aperture, low ISO (gain) | No | Narrow | No
slow shutter speed, small aperture, low ISO (gain) | Yes | Wide | No
fast shutter speed, small aperture, high ISO (gain) | No | Wide | Yes

63 Summary of Part 1. Motivated computational photography. Refresher on the frequency domain, filtering, and restoration. Crash course on photography terminology. Saw the photographer's dilemma regarding shutter-speed, aperture, and ISO settings.

64 Part 2: Computational Optics

65 Ideas/Papers Discussed. 1. Nayar et al., CVPR 97 (yes, it's that old): Catadioptric Omnidirectional Camera. 2. Raskar et al., SIGGRAPH 06: Coded Exposure Photography: Motion Deblurring using Fluttered Shutter. 3. Levin et al., SIGGRAPH 07: Image and Depth from a Conventional Camera with a Coded Aperture. 4. Nagahara et al., ECCV 08: Extended Depth of Field Imaging. 5. Ben-Ezra and Nayar, CVPR 03; Tai et al., PAMI 09: Hybrid Camera. 6. N. Joshi et al., SIGGRAPH 10: Image deblurring with inertial measurement sensors.

66 Catadioptric Omnidirectional Camera, CVPR 1997. Shree K. Nayar, while a Professor at Columbia University; now the T.C. Chang Professor (endowed professorship). PhD, Carnegie Mellon University, 90; Masters, North Carolina State University, 86; BS, Birla Institute of Technology, Ranchi, India, 84. Idea: How can we get an omni-directional view? Use a mirror? What should that mirror be? A parabolic surface.

67 Catadioptric Imaging The idea was not new: Using mirrors to aid imaging/viewing Common in telescope design Used in daily life 67

68 Previous work: Vic Nalwa Similar idea had been demonstrated by Vic Nalwa at Lucent Technologies a few years before (1995). 68

69 Other solutions existed Regular camera Move the camera to get a panorama Fish eye lens Various mirrors approaches 69

70 Shree wanted a true omni-directional camera, and he wanted to produce an orthogonal image on the sensor.

71 Orthogonal Mapping Image Sensor Light rays from scene 71

72 Parabola. What surface to use? A parabolic mirror surface reflects light that would pass through a single point (the focus) in an orthogonal manner!

73 Nayar's Design. Cut off the mirror at h/2; this will reflect all light in a hemisphere. (The only part not visible is where the sensor is actually placed.) z is a point on the surface; r is the distance from the axis; h is the max radius of the mirror (or base size). Equation of the surface: z = (h^2 - r^2) / (2h), with the focus at the origin.
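The defining property of this surface, that any ray aimed at the focus reflects into a ray parallel to the axis (so an orthographic sensor sees the whole hemisphere), can be verified numerically. A sketch assuming the profile z(r) = (h^2 - r^2)/(2h) with the focus at the origin:

```python
def paraboloid_z(r, h):
    """Mirror profile z(r) = (h^2 - r^2) / (2h); the focus sits at the origin."""
    return (h * h - r * r) / (2.0 * h)

def reflect_from_focus(r, h):
    """Reflect a ray travelling from the focus (origin) to the mirror point at
    radius r, working in the (r, z) plane; returns the reflected direction."""
    z = paraboloid_z(r, h)
    nr, nz = r / h, 1.0              # surface normal direction (unnormalised)
    n2 = nr * nr + nz * nz
    dr, dz = r, z                    # incoming direction: focus -> surface point
    dot = dr * nr + dz * nz
    return (dr - 2 * dot * nr / n2, dz - 2 * dot * nz / n2)
```

For every mirror point the reflected direction has zero radial component, i.e. the ray leaves parallel to the optical axis, which is the orthogonal mapping the design is after.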

74 The Omnicamera Top 360 Bottom 360 Single mirror Two cameras placed together, this gives a full 360 FOV. 74

75 Mapping to perspective To produce a perspective image, we just need to compute the rays mapping into the parabolic image. (x p,y p,z p ) 75

76 Perspective Mapping Can unwarp the input to perspective views. The only place there is a problem is where the sensor is looking at itself 76

77 Issues with Catadioptric Imaging. The sampling rate is not the same over the omnicamera image: some regions get fewer samples, others more. But that was in 1997; this is now... Now there are many, many catadioptric-style cameras.

78 Summary: Omni-Camera. Early computational photography: work done before we needed a cool term like Comp Photo. The approach employed a mirror to help increase the field of view. Requires computation to produce a virtual view. Purposely modified the camera with the idea that computation would be applied.

79 Coded Exposure (Flutter Shutter). Raskar et al., SIGGRAPH 2006; while a senior researcher at MERL, now at MIT Media Lab. Idea: conventional exposure makes deblurring hard. Use a different exposure, i.e., a coded exposure. The properties of the new exposure are much better for deblurring. The idea is simple and clever, Raskar's specialty.

80 Traditional Camera Shutter is OPEN [Modified notes from SIGGRAPH 06 presentation] 80

81 Their Camera Flutter Shutter 81

82 Comparison of Blurred Images. This photo is of Ramesh moving an LED. Note that with a conventional camera it looks like a circle, but with the coded exposure you can see the camera is capturing at various times.

83 Sinc Function. Blurring == convolution. Traditional camera: box filter. The Fourier transform of the box filter is a sinc: we have coefficients that drop to 0. WHAT DOES THAT MEAN? IS THAT BAD? (YES)

84 Preserves High Frequencies!!! Fourier transform of the flutter shutter filter. The FFT is well conditioned. No frequencies mapped to 0. Flutter Shutter: Coded Filter 84

85 Comparison 85

86 Inverse Filter (1./H). For the box filter, the inverse filter is unstable: some values explode to infinity! For the coded filter, the inverse filter is stable.

87 The idea is very simple. Using a coded exposure, the inverse filter of the code is stable, much better than an open shutter (box filter). This allows us to deconvolve the image. Also, you know the kernel: remember from our image processing lecture that the hard part is knowing the blur H. So you can use straight inverse filtering to get the answer; since H is well behaved, we don't even need a Wiener filter.

88 THE CAMERA! Used an LCD panel, but this idea can be done in hardware (shutters are electronic for digital cameras) Implementation Completely Portable 88

89 Are all codes good? All ones: the box filter. Alternating: OK, but has some close-to-zero lobes. Random: very well behaved. Raskar's code.
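The difference between the codes can be checked numerically: zero-pad a code, take its DFT, and look at the smallest coefficient magnitude, since near-zero coefficients are frequencies the blur destroys. The 26-chip binary code below is an illustrative made-up sequence, not Raskar's published code:

```python
import cmath

def min_dft_magnitude(code, n=256):
    """Smallest |H(k)| of a shutter code zero-padded to length n; values near
    zero are frequencies destroyed by the blur, unrecoverable by deconvolution."""
    h = list(code) + [0.0] * (n - len(code))
    return min(abs(sum(h[t] * cmath.exp(-2j * cmath.pi * k * t / n)
                       for t in range(n)))
               for k in range(n))

box = [1] * 26                                   # traditional open shutter
coded = [1, 0, 1, 1, 0, 1, 1, 1, 0, 0, 1, 0, 1,  # illustrative broadband code,
         0, 1, 1, 1, 0, 1, 0, 0, 1, 1, 0, 1, 1]  # not Raskar's published one
```

The box code's spectrum touches zero (the sinc nulls), while a broadband binary code keeps every frequency comfortably above zero, which is exactly why its inverse filter is stable.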

90 Some Caveats Assumes a horizontal motion blur. But processing can be used to rectify the image first. 90

91 Coded Exposure Summary. A simple and clever idea, and an excellent example of computational photography: change the imaging approach and assume computation will be necessary. What about under no-blur conditions? The image is slightly darker, since the shutter is not open the entire time. So this approach targets a specific application, but the idea is good: camera blur is a real problem, and if you can correct it well in software, many otherwise unusable images can be used. Also, for particular applications, such as surveillance, this is good.

92 Coded Aperture, SIGGRAPH 07. Anat Levin: PhD 06, Hebrew University; published while a post-doc at MIT; now at the Weizmann Institute of Science. Idea: code the aperture. This code makes identifying depth blur easier. The paper has several contributions: 1. image priors for deblurring; 2. a coded aperture for detecting blur effects; 3. depth reconstruction from contribution 2; 4. refocusing of pixels (all-in-focus image).

93 Coded Aperture Recall DoF blurring Points that are not in focus are blurred. This is called the circle of confusion. The circle is because apertures are round. infocus out of focus (dof blur) 93

94 Two advantages. The coded PSF allows better discrimination between image patches suffering from DoF blur than the conventional aperture PSF. The coded PSF is also better suited for deconvolution than the conventional aperture PSF.

95 Determining Blur Scale. The coded aperture reduces uncertainty in scale identification (conventional vs. coded; larger scale, correct scale, smaller scale). Slide from Anat's SIGGRAPH talk.

96 Regularizing depth estimation. Input; local depth estimation; regularized depth.

97 Input

98 All-focused (deconvolved)

99 Comparison- conventional aperture result Ringing due to wrong scale estimation

100 Comparison- coded aperture result

101 Coded Aperture Summary. The paper is about a lot more than the coded aperture: restoration using regularization, the coded aperture, depth estimation, etc. But the idea is to change the PSF of the conventional aperture to be more discriminative and better conditioned for inversion (similar to the coded-exposure idea). Another great example of computational photography: a modification made with the expectation that computation would be used to improve/enhance the image.

102 Extending Depth of Field Focus stacking Idea: Take lots of images with varying focus Combine images to build an all-in-focus image (i.e. extend the DoF) Input: Image sequence with varying focus. Output: All in focus image. 102

103 Considerations in focus stacking. Image alignment: recall that changing the focal plane changes the scale of the image; Kanade-Lucas matching is often used to first align the images. Detecting in-focus regions: how do we know when something is in focus? We need a measure of sharpness. Look at a strong edge detector and consider points lying on edges, or use window operators: look at the sum of gradient magnitudes over a window at each point. Handling noise: in-focus detection is usually noisy or sparse, so we need a method to smooth the in-focus map. MRF formulation: data cost = edge strength; smoothness cost = difference with neighbor labels (i.e., favor a smooth map). Compositing: finally combine the values; can average equally sharp pixels from different images. Bonus of focus stacking: approximate depth estimation from the focus map.

104 Basic focus stacking algorithm. Step 1: Align images (if necessary). Step 2: Perform a sharpness measure on each image. Step 3: Build an in-focus map (i.e., each pixel maps to the image in which it is sharpest). Step 4: Smooth the result (step 3 is generally noisy). Step 5: Composite the image (based on the result from step 4).
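Steps 2-5 in miniature: the sketch below uses a simple gradient-sum sharpness measure and skips alignment and the MRF smoothing, so it is an illustration of the selection step only:

```python
def sharpness(img, x, y):
    """Crude sharpness measure: sum of absolute backward differences at a pixel."""
    return abs(img[x][y] - img[x - 1][y]) + abs(img[x][y] - img[x][y - 1])

def focus_stack(images):
    """Per pixel, copy the value from the image whose local gradient is strongest
    (ties go to the earlier image); borders are left at zero in this sketch."""
    M, N = len(images[0]), len(images[0][0])
    out = [[0.0] * N for _ in range(M)]
    for x in range(1, M):
        for y in range(1, N):
            best = max(images, key=lambda im: sharpness(im, x, y))
            out[x][y] = best[x][y]
    return out
```

A real pipeline would smooth the per-pixel winner map (the MRF of step 4) before compositing; selecting each pixel independently, as here, produces the noisy map the slides warn about.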

105 Some examples You can find many examples, just google: focus stacking or focal stacking 105

106 Wikipedia Example. Focus stacking is very common in microscopy. Near; Far; Composite. A very simple example using only 2 DoF images (near and far).

107 Flexible Depth of Field Imaging, ECCV 2008. Hajime Nagahara et al.; Assoc. Prof. at Osaka University, Japan; while a visiting scientist at Columbia. Idea: use a translating sensor and integrate the image during the sweep. The PSF of this integration is depth invariant; deconvolve the image to obtain an extended DoF.

108 Basic Idea: Integrating the response from a moving sensor. As we just saw, most focus stacking approaches take several individual images exhibiting spatially varying blur due to scene depth. This is done by moving the sensor (i.e., changing the focal plane) while capturing the images. Nagahara's camera instead moves the sensor in a continuous linear motion (at uniform speed) and integrates the responses to produce a single output image.

109 Conventional DoF PSF and IDoF PSF. Left: a conventional camera focused at 750mm; right: the translating, integrating camera (focal planes at 450mm/1100mm; scene points at 550mm, 750mm, 2000mm). If we model the DoF PSF as a pillbox function, the conventional camera's PSF is different for each depth. What is the integrated camera's graph showing us? The resulting blur function for points at varying depth. Notice anything? They are the same PSF no matter where the point is located! This is a result of the integration.

110 Conventional DoF PSF and IDoF PSF. If we model the DoF PSF as a Gaussian function instead of a pillbox, this is the PSF for varying depths for a camera focused at 750mm. The conventional PSF is again different for each point, while the translating camera's PSF is again similar across depths. This implies that the IDoF PSF is invariant to depth.

111 What does it mean? The blur induced by the translation + integration is depth invariant, i.e., the same PSF is applied to all points, no matter their depth (shown on the previous 2 slides). We can therefore deconvolve the image with a single PSF to obtain the extended depth-of-field image. The captured image looks blurry, since it is integrated over the focus sweep: it has been blurred by the IDoF PSF. But after deconvolving the image with the known IDoF PSF, the image is in focus with an extended DoF.

112 Some Examples. 1: Image captured with the translating sensor. 2: Deconvolved result. 3: Conventional camera using the same aperture and exposure settings as (1). 4: Conventional camera using a smaller aperture but the same exposure; the image has better DoF due to the small aperture, but is darker due to the short exposure, so intensities must be scaled, revealing noise.

113 Some Examples. 1: Image captured with the translating sensor. 2: Deconvolved result. 3: Conventional camera using the same aperture and exposure settings as (1). 4: Conventional camera using a smaller aperture but the same exposure; the image has better DoF due to the small aperture, but is darker due to the short exposure, so intensities must be scaled, revealing noise.

114 Extended DoF Summary. Classic focus stacking is a very useful trick commonly used in photography; several software packages are available online to combine images, and Adobe Photoshop supports it (so it must be important). We also examined the integrated DoF approach: by integrating the response of the moving sensor we can give the image a constant PSF no matter the depth. Advantage over Levin's approach? No need to estimate the depth position. Disadvantage over Levin's approach? No way to obtain the depth map. Have a look at the paper; they do several other very clever tricks too.

115 Motion Deblurring Using Hybrid Imaging. CVPR 2003: Moshe Ben-Ezra, while a research scientist at Columbia University; now at Microsoft Research Asia; PhD, Hebrew University 2000. CVPR 08 / PAMI 09: Yu-Wing Tai, while my student and intern at MSR-Asia; PhD, NUS 2009; now at KAIST, Korea. SIGGRAPH 10: Neel Joshi, Microsoft Research Redmond; PhD, UCSD 08; now a researcher at MSR. Idea: motion blur from camera motion is a problem, and we don't know the blur. Use a cheap auxiliary camera to capture the scene at low resolution but high frame rate, to quickly compute the motion.

116 Problem. Blurring from camera motion (the scene itself is static) is a problem.

117 Deconvolution can be used, but.. What is the motion? 117

118 Combine two cameras. High-res camera (low frame rate) + low-res camera (high frame rate): split the optics between two cameras (Moshe's design; also Tai's design). Inertial measurement sensors (Neel's design).

119 Motion computation. Low-resolution fast frames: frame t, t+1, t+2,..., t+(n-1), t+n. Motion between frames is considered a global translation (standard template matching). Combining this piecewise translation gives a global motion path over the whole exposure time that can be very complex.
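Chaining the per-frame translations into a blur kernel can be sketched as follows; the fixed kernel size and the equal-energy deposition per frame are simplifying assumptions of this sketch:

```python
def psf_from_translations(shifts, size):
    """Accumulate per-frame global translations (dx, dy), as measured on the
    fast low-res camera, into a motion-blur kernel: each visited position
    deposits equal energy, and the kernel is normalised to sum to 1."""
    k = [[0.0] * size for _ in range(size)]
    c = size // 2
    x = y = 0
    for dx, dy in [(0, 0)] + list(shifts):   # include the starting position
        x += dx
        y += dy
        k[c + x][c + y] += 1.0
    s = sum(map(sum, k))
    return [[v / s for v in row] for row in k]
```

This piecewise-linear path is exactly the convolution matrix H shown on the next slides, ready for standard deconvolution against the high-res frame.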

120 Example Convolution matrix (H), i.e. the motion blur -> 120

121 Example Convolution matrix (H), i.e. the motion blur -> 121

122 The motion blur. This is basically the motion of the camera as experienced by the high-resolution image's pixels over the exposure time.

123 Deblurring Using standard deconvolution with the estimated PSF. 123

124 Deblurring Using standard deconvolution with the estimated PSF. 124

125 Tai's extension: Beyond Camera Shake. Use the high-frame-rate camera to compute optical flow. Tai generated per-pixel convolution kernels (h).

126 Tai's Optimization Procedure. Global invariant kernel (hand shake) vs. spatially varying kernels (object moving): the deconvolution equation plus low-resolution and kernel regularization terms. The problem is reformulated to perform spatially varying deconvolution + regularization against the low-res images. This slide is just to prove we are smart.

127 Why the last slide? To show that computational photography leads to new mathematical innovations on classical problems Previously, spatially varying deconvolution was not popular Why? Impossible to acquire the necessary information Computational photography design makes this doable 127

128 Spatially Varying Deblurring Blurry Spatially Varying Kernels (Single Depth Plane) Deblurred Using Correct Kernel From Neel Joshi s SIGGRAPH talk 128 Deblurred Using Center Kernel Neel Joshi, SIGGRAPH 2010

129 Hybrid Imaging Summary Idea is simple Should be able to do this on a single chip Will just take a redesign of the CCD sampling Allows camera ego motion to be computed reasonably accurately Produces good results Type of motion is limited though Global in-plane translation is limited Extended by Tai to do spatially varying blur Extended by Joshi to use inertial sensors 129

130 Computational Optics Summary. Exploit the fact that images will be processed. Early work involved simple image warping (omni-camera). Raskar showed that a very simple modification to the exposure made 1D motion deblurring significantly better. Anat followed by redesigning the aperture to improve DoF applications. Hajime produced an image that is completely undesirable unless processed: by moving the sensor he induced a depth-invariant blur for extended DoF. Moshe (and others) exploited auxiliary information in the processing to address deblurring.


More information

Deconvolution , , Computational Photography Fall 2017, Lecture 17

Deconvolution , , Computational Photography Fall 2017, Lecture 17 Deconvolution http://graphics.cs.cmu.edu/courses/15-463 15-463, 15-663, 15-862 Computational Photography Fall 2017, Lecture 17 Course announcements Homework 4 is out. - Due October 26 th. - There was another

More information

Deconvolution , , Computational Photography Fall 2018, Lecture 12

Deconvolution , , Computational Photography Fall 2018, Lecture 12 Deconvolution http://graphics.cs.cmu.edu/courses/15-463 15-463, 15-663, 15-862 Computational Photography Fall 2018, Lecture 12 Course announcements Homework 3 is out. - Due October 12 th. - Any questions?

More information

Coding and Modulation in Cameras

Coding and Modulation in Cameras Coding and Modulation in Cameras Amit Agrawal June 2010 Mitsubishi Electric Research Labs (MERL) Cambridge, MA, USA Coded Computational Imaging Agrawal, Veeraraghavan, Narasimhan & Mohan Schedule Introduction

More information

Implementation of Adaptive Coded Aperture Imaging using a Digital Micro-Mirror Device for Defocus Deblurring

Implementation of Adaptive Coded Aperture Imaging using a Digital Micro-Mirror Device for Defocus Deblurring Implementation of Adaptive Coded Aperture Imaging using a Digital Micro-Mirror Device for Defocus Deblurring Ashill Chiranjan and Bernardt Duvenhage Defence, Peace, Safety and Security Council for Scientific

More information

Frequency Domain Enhancement

Frequency Domain Enhancement Tutorial Report Frequency Domain Enhancement Page 1 of 21 Frequency Domain Enhancement ESE 558 - DIGITAL IMAGE PROCESSING Tutorial Report Instructor: Murali Subbarao Written by: Tutorial Report Frequency

More information

Admin Deblurring & Deconvolution Different types of blur

Admin Deblurring & Deconvolution Different types of blur Admin Assignment 3 due Deblurring & Deconvolution Lecture 10 Last lecture Move to Friday? Projects Come and see me Different types of blur Camera shake User moving hands Scene motion Objects in the scene

More information

Project 4 Results http://www.cs.brown.edu/courses/cs129/results/proj4/jcmace/ http://www.cs.brown.edu/courses/cs129/results/proj4/damoreno/ http://www.cs.brown.edu/courses/csci1290/results/proj4/huag/

More information

Computational Camera & Photography: Coded Imaging

Computational Camera & Photography: Coded Imaging Computational Camera & Photography: Coded Imaging Camera Culture Ramesh Raskar MIT Media Lab http://cameraculture.media.mit.edu/ Image removed due to copyright restrictions. See Fig. 1, Eight major types

More information

Computational Photography and Video. Prof. Marc Pollefeys

Computational Photography and Video. Prof. Marc Pollefeys Computational Photography and Video Prof. Marc Pollefeys Today s schedule Introduction of Computational Photography Course facts Syllabus Digital Photography What is computational photography Convergence

More information

multiframe visual-inertial blur estimation and removal for unmodified smartphones

multiframe visual-inertial blur estimation and removal for unmodified smartphones multiframe visual-inertial blur estimation and removal for unmodified smartphones, Severin Münger, Carlo Beltrame, Luc Humair WSCG 2015, Plzen, Czech Republic images taken by non-professional photographers

More information

DIGITAL IMAGE PROCESSING UNIT III

DIGITAL IMAGE PROCESSING UNIT III DIGITAL IMAGE PROCESSING UNIT III 3.1 Image Enhancement in Frequency Domain: Frequency refers to the rate of repetition of some periodic events. In image processing, spatial frequency refers to the variation

More information

Recent Advances in Image Deblurring. Seungyong Lee (Collaboration w/ Sunghyun Cho)

Recent Advances in Image Deblurring. Seungyong Lee (Collaboration w/ Sunghyun Cho) Recent Advances in Image Deblurring Seungyong Lee (Collaboration w/ Sunghyun Cho) Disclaimer Many images and figures in this course note have been copied from the papers and presentation materials of previous

More information

High dynamic range imaging and tonemapping

High dynamic range imaging and tonemapping High dynamic range imaging and tonemapping http://graphics.cs.cmu.edu/courses/15-463 15-463, 15-663, 15-862 Computational Photography Fall 2017, Lecture 12 Course announcements Homework 3 is out. - Due

More information

Computational Photography

Computational Photography Computational photography Computational Photography Digital Visual Effects Yung-Yu Chuang wikipedia: Computational photography h refers broadly to computational imaging techniques that enhance or extend

More information

Coded Aperture for Projector and Camera for Robust 3D measurement

Coded Aperture for Projector and Camera for Robust 3D measurement Coded Aperture for Projector and Camera for Robust 3D measurement Yuuki Horita Yuuki Matugano Hiroki Morinaga Hiroshi Kawasaki Satoshi Ono Makoto Kimura Yasuo Takane Abstract General active 3D measurement

More information

Burst Photography! EE367/CS448I: Computational Imaging and Display! stanford.edu/class/ee367! Lecture 7! Gordon Wetzstein! Stanford University!

Burst Photography! EE367/CS448I: Computational Imaging and Display! stanford.edu/class/ee367! Lecture 7! Gordon Wetzstein! Stanford University! Burst Photography! EE367/CS448I: Computational Imaging and Display! stanford.edu/class/ee367! Lecture 7! Gordon Wetzstein! Stanford University! Motivation! wikipedia! exposure sequence! -4 stops! Motivation!

More information

Toward Non-stationary Blind Image Deblurring: Models and Techniques

Toward Non-stationary Blind Image Deblurring: Models and Techniques Toward Non-stationary Blind Image Deblurring: Models and Techniques Ji, Hui Department of Mathematics National University of Singapore NUS, 30-May-2017 Outline of the talk Non-stationary Image blurring

More information

Tonemapping and bilateral filtering

Tonemapping and bilateral filtering Tonemapping and bilateral filtering http://graphics.cs.cmu.edu/courses/15-463 15-463, 15-663, 15-862 Computational Photography Fall 2018, Lecture 6 Course announcements Homework 2 is out. - Due September

More information

TDI2131 Digital Image Processing

TDI2131 Digital Image Processing TDI131 Digital Image Processing Frequency Domain Filtering Lecture 6 John See Faculty of Information Technology Multimedia University Some portions of content adapted from Zhu Liu, AT&T Labs. Most figures

More information

CoE4TN4 Image Processing. Chapter 4 Filtering in the Frequency Domain

CoE4TN4 Image Processing. Chapter 4 Filtering in the Frequency Domain CoE4TN4 Image Processing Chapter 4 Filtering in the Frequency Domain Fourier Transform Sections 4.1 to 4.5 will be done on the board 2 2D Fourier Transform 3 2D Sampling and Aliasing 4 2D Sampling and

More information

Lenses, exposure, and (de)focus

Lenses, exposure, and (de)focus Lenses, exposure, and (de)focus http://graphics.cs.cmu.edu/courses/15-463 15-463, 15-663, 15-862 Computational Photography Fall 2017, Lecture 15 Course announcements Homework 4 is out. - Due October 26

More information

Near-Invariant Blur for Depth and 2D Motion via Time-Varying Light Field Analysis

Near-Invariant Blur for Depth and 2D Motion via Time-Varying Light Field Analysis Near-Invariant Blur for Depth and 2D Motion via Time-Varying Light Field Analysis Yosuke Bando 1,2 Henry Holtzman 2 Ramesh Raskar 2 1 Toshiba Corporation 2 MIT Media Lab Defocus & Motion Blur PSF Depth

More information

Image Deblurring. This chapter describes how to deblur an image using the toolbox deblurring functions.

Image Deblurring. This chapter describes how to deblur an image using the toolbox deblurring functions. 12 Image Deblurring This chapter describes how to deblur an image using the toolbox deblurring functions. Understanding Deblurring (p. 12-2) Using the Deblurring Functions (p. 12-5) Avoiding Ringing in

More information

Image Restoration. Lecture 7, March 23 rd, Lexing Xie. EE4830 Digital Image Processing

Image Restoration. Lecture 7, March 23 rd, Lexing Xie. EE4830 Digital Image Processing Image Restoration Lecture 7, March 23 rd, 2009 Lexing Xie EE4830 Digital Image Processing http://www.ee.columbia.edu/~xlx/ee4830/ thanks to G&W website, Min Wu and others for slide materials 1 Announcements

More information

6.098 Digital and Computational Photography Advanced Computational Photography. Bill Freeman Frédo Durand MIT - EECS

6.098 Digital and Computational Photography Advanced Computational Photography. Bill Freeman Frédo Durand MIT - EECS 6.098 Digital and Computational Photography 6.882 Advanced Computational Photography Bill Freeman Frédo Durand MIT - EECS Administrivia PSet 1 is out Due Thursday February 23 Digital SLR initiation? During

More information

Digital Image Processing. Image Enhancement: Filtering in the Frequency Domain

Digital Image Processing. Image Enhancement: Filtering in the Frequency Domain Digital Image Processing Image Enhancement: Filtering in the Frequency Domain 2 Contents In this lecture we will look at image enhancement in the frequency domain Jean Baptiste Joseph Fourier The Fourier

More information

Agenda. Fusion and Reconstruction. Image Fusion & Reconstruction. Image Fusion & Reconstruction. Dr. Yossi Rubner.

Agenda. Fusion and Reconstruction. Image Fusion & Reconstruction. Image Fusion & Reconstruction. Dr. Yossi Rubner. Fusion and Reconstruction Dr. Yossi Rubner yossi@rubner.co.il Some slides stolen from: Jack Tumblin 1 Agenda We ve seen Panorama (from different FOV) Super-resolution (from low-res) HDR (from different

More information

High Dynamic Range Imaging

High Dynamic Range Imaging High Dynamic Range Imaging 1 2 Lecture Topic Discuss the limits of the dynamic range in current imaging and display technology Solutions 1. High Dynamic Range (HDR) Imaging Able to image a larger dynamic

More information

A Review over Different Blur Detection Techniques in Image Processing

A Review over Different Blur Detection Techniques in Image Processing A Review over Different Blur Detection Techniques in Image Processing 1 Anupama Sharma, 2 Devarshi Shukla 1 E.C.E student, 2 H.O.D, Department of electronics communication engineering, LR College of engineering

More information

The ultimate camera. Computational Photography. Creating the ultimate camera. The ultimate camera. What does it do?

The ultimate camera. Computational Photography. Creating the ultimate camera. The ultimate camera. What does it do? Computational Photography The ultimate camera What does it do? Image from Durand & Freeman s MIT Course on Computational Photography Today s reading Szeliski Chapter 9 The ultimate camera Infinite resolution

More information

SUPER RESOLUTION INTRODUCTION

SUPER RESOLUTION INTRODUCTION SUPER RESOLUTION Jnanavardhini - Online MultiDisciplinary Research Journal Ms. Amalorpavam.G Assistant Professor, Department of Computer Sciences, Sambhram Academy of Management. Studies, Bangalore Abstract:-

More information

Improved motion invariant imaging with time varying shutter functions

Improved motion invariant imaging with time varying shutter functions Improved motion invariant imaging with time varying shutter functions Steve Webster a and Andrew Dorrell b Canon Information Systems Research, Australia (CiSRA), Thomas Holt Drive, North Ryde, Australia

More information

Computational Photography Introduction

Computational Photography Introduction Computational Photography Introduction Jongmin Baek CS 478 Lecture Jan 9, 2012 Background Sales of digital cameras surpassed sales of film cameras in 2004. Digital cameras are cool Free film Instant display

More information

Be aware that there is no universal notation for the various quantities.

Be aware that there is no universal notation for the various quantities. Fourier Optics v2.4 Ray tracing is limited in its ability to describe optics because it ignores the wave properties of light. Diffraction is needed to explain image spatial resolution and contrast and

More information

Image Deblurring and Noise Reduction in Python TJHSST Senior Research Project Computer Systems Lab

Image Deblurring and Noise Reduction in Python TJHSST Senior Research Project Computer Systems Lab Image Deblurring and Noise Reduction in Python TJHSST Senior Research Project Computer Systems Lab 2009-2010 Vincent DeVito June 16, 2010 Abstract In the world of photography and machine vision, blurry

More information

Transfer Efficiency and Depth Invariance in Computational Cameras

Transfer Efficiency and Depth Invariance in Computational Cameras Transfer Efficiency and Depth Invariance in Computational Cameras Jongmin Baek Stanford University IEEE International Conference on Computational Photography 2010 Jongmin Baek (Stanford University) Transfer

More information

1.Discuss the frequency domain techniques of image enhancement in detail.

1.Discuss the frequency domain techniques of image enhancement in detail. 1.Discuss the frequency domain techniques of image enhancement in detail. Enhancement In Frequency Domain: The frequency domain methods of image enhancement are based on convolution theorem. This is represented

More information

Image Enhancement. DD2423 Image Analysis and Computer Vision. Computational Vision and Active Perception School of Computer Science and Communication

Image Enhancement. DD2423 Image Analysis and Computer Vision. Computational Vision and Active Perception School of Computer Science and Communication Image Enhancement DD2423 Image Analysis and Computer Vision Mårten Björkman Computational Vision and Active Perception School of Computer Science and Communication November 15, 2013 Mårten Björkman (CVAP)

More information

fast blur removal for wearable QR code scanners

fast blur removal for wearable QR code scanners fast blur removal for wearable QR code scanners Gábor Sörös, Stephan Semmler, Luc Humair, Otmar Hilliges ISWC 2015, Osaka, Japan traditional barcode scanning next generation barcode scanning ubiquitous

More information

4 STUDY OF DEBLURRING TECHNIQUES FOR RESTORED MOTION BLURRED IMAGES

4 STUDY OF DEBLURRING TECHNIQUES FOR RESTORED MOTION BLURRED IMAGES 4 STUDY OF DEBLURRING TECHNIQUES FOR RESTORED MOTION BLURRED IMAGES Abstract: This paper attempts to undertake the study of deblurring techniques for Restored Motion Blurred Images by using: Wiener filter,

More information

Recent advances in deblurring and image stabilization. Michal Šorel Academy of Sciences of the Czech Republic

Recent advances in deblurring and image stabilization. Michal Šorel Academy of Sciences of the Czech Republic Recent advances in deblurring and image stabilization Michal Šorel Academy of Sciences of the Czech Republic Camera shake stabilization Alternative to OIS (optical image stabilization) systems Should work

More information

Basic principles of photography. David Capel 346B IST

Basic principles of photography. David Capel 346B IST Basic principles of photography David Capel 346B IST Latin Camera Obscura = Dark Room Light passing through a small hole produces an inverted image on the opposite wall Safely observing the solar eclipse

More information

Enhanced Method for Image Restoration using Spatial Domain

Enhanced Method for Image Restoration using Spatial Domain Enhanced Method for Image Restoration using Spatial Domain Gurpal Kaur Department of Electronics and Communication Engineering SVIET, Ramnagar,Banur, Punjab, India Ashish Department of Electronics and

More information

Digital Image Processing

Digital Image Processing Digital Image Processing Filtering in the Frequency Domain (Application) Christophoros Nikou cnikou@cs.uoi.gr University of Ioannina - Department of Computer Science and Engineering 2 Periodicity of the

More information

Implementation of Image Deblurring Techniques in Java

Implementation of Image Deblurring Techniques in Java Implementation of Image Deblurring Techniques in Java Peter Chapman Computer Systems Lab 2007-2008 Thomas Jefferson High School for Science and Technology Alexandria, Virginia January 22, 2008 Abstract

More information

To Do. Advanced Computer Graphics. Outline. Computational Imaging. How do we see the world? Pinhole camera

To Do. Advanced Computer Graphics. Outline. Computational Imaging. How do we see the world? Pinhole camera Advanced Computer Graphics CSE 163 [Spring 2017], Lecture 14 Ravi Ramamoorthi http://www.cs.ucsd.edu/~ravir To Do Assignment 2 due May 19 Any last minute issues or questions? Next two lectures: Imaging,

More information

When Does Computational Imaging Improve Performance?

When Does Computational Imaging Improve Performance? When Does Computational Imaging Improve Performance? Oliver Cossairt Assistant Professor Northwestern University Collaborators: Mohit Gupta, Changyin Zhou, Daniel Miau, Shree Nayar (Columbia University)

More information

Midterm Review. Image Processing CSE 166 Lecture 10

Midterm Review. Image Processing CSE 166 Lecture 10 Midterm Review Image Processing CSE 166 Lecture 10 Topics covered Image acquisition, geometric transformations, and image interpolation Intensity transformations Spatial filtering Fourier transform and

More information

Removing Temporal Stationary Blur in Route Panoramas

Removing Temporal Stationary Blur in Route Panoramas Removing Temporal Stationary Blur in Route Panoramas Jiang Yu Zheng and Min Shi Indiana University Purdue University Indianapolis jzheng@cs.iupui.edu Abstract The Route Panorama is a continuous, compact

More information

Lecture #10. EECS490: Digital Image Processing

Lecture #10. EECS490: Digital Image Processing Lecture #10 Wraparound and padding Image Correlation Image Processing in the frequency domain A simple frequency domain filter Frequency domain filters High-pass, low-pass Apodization Zero-phase filtering

More information

Optical image stabilization (IS)

Optical image stabilization (IS) Optical image stabilization (IS) CS 178, Spring 2010 Marc Levoy Computer Science Department Stanford University Outline! what are the causes of camera shake? how can you avoid it (without having an IS

More information

Cameras. Digital Visual Effects, Spring 2008 Yung-Yu Chuang 2008/2/26. with slides by Fredo Durand, Brian Curless, Steve Seitz and Alexei Efros

Cameras. Digital Visual Effects, Spring 2008 Yung-Yu Chuang 2008/2/26. with slides by Fredo Durand, Brian Curless, Steve Seitz and Alexei Efros Cameras Digital Visual Effects, Spring 2008 Yung-Yu Chuang 2008/2/26 with slides by Fredo Durand, Brian Curless, Steve Seitz and Alexei Efros Camera trial #1 scene film Put a piece of film in front of

More information

Image acquisition. Midterm Review. Digitization, line of image. Digitization, whole image. Geometric transformations. Interpolation 10/26/2016

Image acquisition. Midterm Review. Digitization, line of image. Digitization, whole image. Geometric transformations. Interpolation 10/26/2016 Image acquisition Midterm Review Image Processing CSE 166 Lecture 10 2 Digitization, line of image Digitization, whole image 3 4 Geometric transformations Interpolation CSE 166 Transpose these matrices

More information

Cameras. Shrinking the aperture. Camera trial #1. Pinhole camera. Digital Visual Effects Yung-Yu Chuang. Put a piece of film in front of an object.

Cameras. Shrinking the aperture. Camera trial #1. Pinhole camera. Digital Visual Effects Yung-Yu Chuang. Put a piece of film in front of an object. Camera trial #1 Cameras Digital Visual Effects Yung-Yu Chuang scene film with slides by Fredo Durand, Brian Curless, Steve Seitz and Alexei Efros Put a piece of film in front of an object. Pinhole camera

More information

Image Deblurring with Blurred/Noisy Image Pairs

Image Deblurring with Blurred/Noisy Image Pairs Image Deblurring with Blurred/Noisy Image Pairs Huichao Ma, Buping Wang, Jiabei Zheng, Menglian Zhou April 26, 2013 1 Abstract Photos taken under dim lighting conditions by a handheld camera are usually

More information

Cameras and Sensors. Today. Today. It receives light from all directions. BIL721: Computational Photography! Spring 2015, Lecture 2!

Cameras and Sensors. Today. Today. It receives light from all directions. BIL721: Computational Photography! Spring 2015, Lecture 2! !! Cameras and Sensors Today Pinhole camera! Lenses! Exposure! Sensors! photo by Abelardo Morell BIL721: Computational Photography! Spring 2015, Lecture 2! Aykut Erdem! Hacettepe University! Computer Vision

More information

EEL 6562 Image Processing and Computer Vision Image Restoration

EEL 6562 Image Processing and Computer Vision Image Restoration DEPARTMENT OF ELECTRICAL & COMPUTER ENGINEERING EEL 6562 Image Processing and Computer Vision Image Restoration Rajesh Pydipati Introduction Image Processing is defined as the analysis, manipulation, storage,

More information

A Framework for Analysis of Computational Imaging Systems

A Framework for Analysis of Computational Imaging Systems A Framework for Analysis of Computational Imaging Systems Kaushik Mitra, Oliver Cossairt, Ashok Veeraghavan Rice University Northwestern University Computational imaging CI systems that adds new functionality

More information

Digital Image Processing. Filtering in the Frequency Domain (Application)

Digital Image Processing. Filtering in the Frequency Domain (Application) Digital Image Processing Filtering in the Frequency Domain (Application) Christophoros Nikou cnikou@cs.uoi.gr University of Ioannina - Department of Computer Science 2 Periodicity of the DFT The range

More information

Unit 1: Image Formation

Unit 1: Image Formation Unit 1: Image Formation 1. Geometry 2. Optics 3. Photometry 4. Sensor Readings Szeliski 2.1-2.3 & 6.3.5 1 Physical parameters of image formation Geometric Type of projection Camera pose Optical Sensor

More information

To start there are three key properties that you need to understand: ISO (sensitivity)

To start there are three key properties that you need to understand: ISO (sensitivity) Some Photo Fundamentals Photography is at once relatively simple and technically confusing at the same time. The camera is basically a black box with a hole in its side camera comes from camera obscura,

More information

Wavefront coding. Refocusing & Light Fields. Wavefront coding. Final projects. Is depth of field a blur? Frédo Durand Bill Freeman MIT - EECS

Wavefront coding. Refocusing & Light Fields. Wavefront coding. Final projects. Is depth of field a blur? Frédo Durand Bill Freeman MIT - EECS 6.098 Digital and Computational Photography 6.882 Advanced Computational Photography Final projects Send your slides by noon on Thrusday. Send final report Refocusing & Light Fields Frédo Durand Bill Freeman

More information

Optical image stabilization (IS)

Optical image stabilization (IS) Optical image stabilization (IS) CS 178, Spring 2011 Marc Levoy Computer Science Department Stanford University Outline! what are the causes of camera shake? how can you avoid it (without having an IS

More information

Aperture & ƒ/stop Worksheet

Aperture & ƒ/stop Worksheet Tools and Program Needed: Digital C. Computer USB Drive Bridge PhotoShop Name: Manipulating Depth-of-Field Aperture & stop Worksheet The aperture setting (AV on the dial) is a setting to control the amount

More information

8. Lecture. Image restoration: Fourier domain

8. Lecture. Image restoration: Fourier domain 8. Lecture Image restoration: Fourier domain 1 Structured noise 2 Motion blur 3 Filtering in the Fourier domain ² Spatial ltering (average, Gaussian,..) can be done in the Fourier domain (convolution theorem)

More information

Restoration of Motion Blurred Document Images

Restoration of Motion Blurred Document Images Restoration of Motion Blurred Document Images Bolan Su 12, Shijian Lu 2 and Tan Chew Lim 1 1 Department of Computer Science,School of Computing,National University of Singapore Computing 1, 13 Computing

More information

Photographing Long Scenes with Multiviewpoint

Photographing Long Scenes with Multiviewpoint Photographing Long Scenes with Multiviewpoint Panoramas A. Agarwala, M. Agrawala, M. Cohen, D. Salesin, R. Szeliski Presenter: Stacy Hsueh Discussant: VasilyVolkov Motivation Want an image that shows an

More information

Lens Openings & Shutter Speeds

Lens Openings & Shutter Speeds Illustrations courtesy Life Magazine Encyclopedia of Photography Lens Openings & Shutter Speeds Controlling Exposure & the Rendering of Space and Time Equal Lens Openings/ Double Exposure Time Here is

More information

Image Restoration. Lecture 7, March 23 rd, Lexing Xie. EE4830 Digital Image Processing

Image Restoration. Lecture 7, March 23 rd, Lexing Xie. EE4830 Digital Image Processing Image Restoration Lecture 7, March 23 rd, 2008 Lexing Xie EE4830 Digital Image Processing http://www.ee.columbia.edu/~xlx/ee4830/ thanks to G&W website, Min Wu and others for slide materials 1 Announcements

More information

Image Enhancement of Low-light Scenes with Near-infrared Flash Images

Image Enhancement of Low-light Scenes with Near-infrared Flash Images Research Paper Image Enhancement of Low-light Scenes with Near-infrared Flash Images Sosuke Matsui, 1 Takahiro Okabe, 1 Mihoko Shimano 1, 2 and Yoichi Sato 1 We present a novel technique for enhancing

More information

CS534 Introduction to Computer Vision. Linear Filters. Ahmed Elgammal Dept. of Computer Science Rutgers University

CS534 Introduction to Computer Vision. Linear Filters. Ahmed Elgammal Dept. of Computer Science Rutgers University CS534 Introduction to Computer Vision Linear Filters Ahmed Elgammal Dept. of Computer Science Rutgers University Outlines What are Filters Linear Filters Convolution operation Properties of Linear Filters

More information

Digital Imaging Systems for Historical Documents

Digital Imaging Systems for Historical Documents Digital Imaging Systems for Historical Documents Improvement Legibility by Frequency Filters Kimiyoshi Miyata* and Hiroshi Kurushima** * Department Museum Science, ** Department History National Museum

More information

6.A44 Computational Photography

6.A44 Computational Photography Add date: Friday 6.A44 Computational Photography Depth of Field Frédo Durand We allow for some tolerance What happens when we close the aperture by two stop? Aperture diameter is divided by two is doubled

More information

Getting light to imager. Capturing Images. Depth and Distance. Ideal Imaging. CS559 Lecture 2 Lights, Cameras, Eyes

Getting light to imager. Capturing Images. Depth and Distance. Ideal Imaging. CS559 Lecture 2 Lights, Cameras, Eyes CS559 Lecture 2 Lights, Cameras, Eyes Last time: what is an image idea of image-based (raster representation) Today: image capture/acquisition, focus cameras and eyes displays and intensities Corrected

More information

CAMERA BASICS. Stops of light

CAMERA BASICS. Stops of light CAMERA BASICS Stops of light A stop of light isn t a quantifiable measurement it s a relative measurement. A stop of light is defined as a doubling or halving of any quantity of light. The word stop is

More information

Changyin Zhou. Ph.D, Computer Science, Columbia University Oct 2012

Changyin Zhou. Ph.D, Computer Science, Columbia University Oct 2012 Changyin Zhou Software Engineer at Google X Google Inc. 1600 Amphitheater Parkway, Mountain View, CA 94043 E-mail: changyin@google.com URL: http://www.changyin.org Office: (917) 209-9110 Mobile: (646)

More information

Image Enhancement of Low-light Scenes with Near-infrared Flash Images

Image Enhancement of Low-light Scenes with Near-infrared Flash Images IPSJ Transactions on Computer Vision and Applications Vol. 2 215 223 (Dec. 2010) Research Paper Image Enhancement of Low-light Scenes with Near-infrared Flash Images Sosuke Matsui, 1 Takahiro Okabe, 1

More information

Mastering Y our Your Digital Camera

Mastering Y our Your Digital Camera Mastering Your Digital Camera The Exposure Triangle The ISO setting on your camera defines how sensitive it is to light. Normally ISO 100 is the least sensitive setting on your camera and as the ISO numbers

More information

IMAGE ENHANCEMENT IN SPATIAL DOMAIN
