Research Trends in Spatial Imaging 3D Video


Spatial image reproduction 3D video (hereinafter, "spatial image reproduction") can display natural 3D images without special glasses. Its principles lie in integral photography and holography, and research into using them for video has grown with recent developments in imaging technology. STRL is especially interested in using spatial image reproduction for 3D television. One major issue is the huge amount of information that must be handled in order to digitize (and consequently transmit and display) such images. In this article, we explain the theory behind integral imaging and holography and survey the research being conducted on digitizing them.

1. Introduction

Television has evolved from black-and-white to color and Hi-Vision, and there is demand for even more realism. As clear evidence of this demand, 8K Super Hi-Vision (8K) will start test broadcasts in 2016. 8K video conveys a strong sense of reality, giving the impression that the viewer is actually at the scene. However, conventional television has always shown flat (2D) images, and 8K is no different. Thus, for most people, who generally see the world in 3D, televisions that display images in 3D would provide a new level of realism. Historically, people have long been interested in 3D images: the stereoscope 1), which is based on the same principles as current 3D movies and TV, was invented in 1838. As for spatial image reproduction, which is considered the ideal 3D display method because it needs no special glasses and produces an image in the space in front of the viewer, integral photography 2) dates back to 1906, and holography 3) was invented in 1948. As such, many of the 3D display methods available today began long ago as 3D photography technologies. More recently, attempts have been made to record and display moving spatial images using electronic devices (something that would be impossible with photographic plates).
In this article, we introduce research on applying integral imaging and holography to television.

2. Types of Spatial Image Reproduction

When we look at an object, the light reflecting from it ("object light") enters the eye, passes through the lens, and forms an image on the retina. Thus, even if an object is not present, if light equivalent to the object light could somehow be produced and someone looked at it, it would appear as though the object were actually there. Spatial imaging is based on this idea and involves recording and playing back this object light. As shown in Figure 1, object light produced by the display enters the viewer's eyes, and the object appears to be where it was when the object light was recorded.

Figure 1: Spatial image reproduction (display screen, viewer, and spatial image composed of object light)

These are the same conditions as when actually viewing the object, so no special glasses are needed, and the image changes depending on the viewpoint. The eyes are also able to focus on the image as if it were a normal object. For these reasons, spatial imaging is considered an ideal 3D format. There are two types of spatial imaging: integral imaging and holography. Integral imaging is based on a technique called integral photography, which was first proposed by Lippmann 2). The technique uses a flat lens array consisting of many small lenses for both recording and reproducing the image, and it works under natural light. Holography, invented by Gabor 3), uses the diffraction and interference of light to record and reproduce it. It requires highly coherent light such as laser light, but in theory it can record and play back the object light perfectly. In the following sections, we describe the theory of these methods, the issues in using them to display motion, and research aimed at creating practical technologies.

3. Integral Imaging

3.1 Theory

Integral imaging is based on the same principle as the 3D photographic technique called integral photography. Integral photography uses a flat lens array composed of many convex lenses to sample the object light in its directions of propagation, and this group of light rays is recorded and then reproduced. A small image (elemental image) is recorded for each lens on the recording and playback medium (e.g., a photographic plate), as shown in Figure 2(a). For reproduction, the lens array is placed in front of the developed photographic plate showing the recorded elemental images, with the positions of the plate and the array reversed from those used in recording, as shown in Figure 2(b).
The diffuse light from the projector travels through each elemental image, which modulates its luminance, and each elemental lens changes its direction of propagation, creating light rays that propagate in the direction opposite to those incident on the photographic plate when the image was recorded. These light rays form an image at the position of the object. We perceive this optical image as a 3D object, but as shown in Figure 2, the capture and viewing directions are opposite, so the depth appears reversed. This issue can be resolved by rotating each elemental image 180 degrees around its center 4).

3.2 Digital Integral Imaging Research Trends

To apply the integral technique to 3D television, our laboratory proposed an integral 3D television 4) able to record and play back the object light in real time by replacing the photographic plates in Figure 2 with electronic devices, namely a television camera and a display. This system avoids depth reversal in the 3D image by using a lens array composed of optical fiber lenses (gradient-refractive-index lenses), whose refractive index decreases from the optical axis toward the periphery, to optically rotate each elemental image by 180 degrees 5). To increase the depth range the system can reproduce, the pixel pitch of the display must be reduced. Also, the number of elemental lenses, each of which corresponds to one pixel of the 3D image, must be increased to raise the resolution of the 3D image. Thus, to show a 3D image with high resolution and a wide depth range, the display needs a small pixel pitch and many pixels. Our laboratory began research on integral 3D television in the late 1990s and over the years has built prototypes of increasing resolution.
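The depth-reversal correction described in Section 3.1, rotating each elemental image 180 degrees about its own center, can be sketched with NumPy (the array layout, function name, and parameters are illustrative assumptions, not the authors' implementation):

```python
import numpy as np

def correct_depth_reversal(plate, eh, ew):
    """Rotate each elemental image 180 degrees about its own center.

    plate : 2-D array holding the recorded elemental images side by side
    eh, ew: height and width in pixels of one elemental image
    (the plate dimensions are assumed to be exact multiples of eh and ew)
    """
    h, w = plate.shape
    out = plate.copy()
    for i in range(0, h, eh):
        for j in range(0, w, ew):
            # reversing both axes of the sub-image is a 180-degree rotation
            out[i:i + eh, j:j + ew] = plate[i:i + eh, j:j + ew][::-1, ::-1]
    return out
```

Applying the function twice returns the original plate, since two 180-degree rotations cancel.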
Figure 2: Integral photography. (a) Recording: object light passes through the lens array (elemental lenses) and elemental images are recorded on the photographic plate. (b) Reproduction: diffuse light passing through the elemental images and lens array forms an optical image of the object, observed from the direction opposite to capture.

At the NHK STRL Open House in 2011, we demonstrated an integral 3D television 6) (Figure 3) using 8K Super Hi-Vision (8K) projectors and pixel-shift technology *1; with 8,000 scan lines, the system had a resolution equivalent to 15,360 × 8,640 pixels. This 3D television could display 3D images consisting of approximately 100,000 pixels with motion parallax in all directions.

Figure 3: Integral 3D television. (a) Configuration: capture lens array, screen, display lens array, 8,000-scan-line camera and projector. (b) Display equipment. (c) Images reproduced when viewed from above, below, left, and right.

To use the above technology for 3D television broadcasts in the future, the quality of the 3D images must be increased. In particular, the video system on which it is based needs more pixels, but there are limits to systems built on a single video device like the ones we have used so far. Accordingly, we have been studying how to increase the number of pixels by combining multiple cameras and display devices.

*1 A technology in which two green display elements are offset diagonally by half a pixel from each other and the displayed images are merged to produce effectively twice the resolution horizontally and vertically. It can also be applied to image capture elements.

For recording, we have prototyped capture equipment that combines seven high-definition cameras and have confirmed that it can record images with a 2.5-times larger viewing zone *2 both vertically and horizontally 7). We also prototyped display equipment 8) with four times the number of pixels by using a lens array and a convex lens (Figure 4) to optically magnify and join the images from four LCD panels. We are also studying a method for generating 3D images computationally from depth information about the object, for use when the object light is difficult to record with a lens array, such as when the object is far away or very large.
The method photographs the object with multiple cameras at different locations, generates a 3D model of the object from these images, and then computes the set of elemental images for integral imaging from the 3D model 9).

*2 The range of angles through which the 3D image can be seen.

Figure 4: Integral 3D display equipment composed of multiple display devices. (a) Display equipment: four display devices, a lens array, and a magnifying optical system (the dotted red line shows the display area for one device). (b) Reproduced image made by combining the four images.

In theory, the focal accommodation of the eyes is active in integral imaging. We measured eye accommodation while people viewed 3D images produced by our prototype integral imaging display equipment. To eliminate convergence *3 -induced accommodation, we measured the accommodation response when the integral 3D image was viewed with only one eye. We obtained data indicating that the participants' focus adjusted as the object position moved in the depth direction, the same as when viewing a real object 10).

*3 When a person looks at something, the lines of sight from both eyes cross at it.

4. Holography

4.1 Theory

Holography uses the diffraction and interference of light to record and reproduce images. It normally uses laser light and so is done in a dark room. As shown in Figure 5(a), the laser light is divided into two beams, one of which illuminates the object while the other illuminates the photographic plate. At the photographic plate, the object light reflected from the object and the light illuminating the plate directly (the reference light) interfere, and a light-and-dark pattern (interference pattern) is recorded on the plate. This record of the interference pattern is called a hologram. To reproduce the image, the hologram is illuminated from the same direction with light having the same characteristics as the reference beam used in recording, as shown in Figure 5(b). The reference light is diffracted by the interference pattern recorded in the hologram, creating light that is the same as the object light and forming an optical image of the object as it was.
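The recording step shown in Figure 5(a) can be simulated numerically: the hologram is simply the recorded intensity of the object wave plus the reference wave. The sketch below uses an assumed geometry (a single point object 10 cm from the plate, a 1 μm sampling pitch, and a normally incident plane reference wave); the result is the characteristic zone-plate interference pattern:

```python
import numpy as np

wavelength = 632.8e-9            # He-Ne laser line, as quoted later in the text
k = 2 * np.pi / wavelength
z = 0.1                          # point object 10 cm in front of the plate (assumed)

# Sampling grid on the photographic plate (1 um pitch, 512 x 512 samples; assumed)
n, pitch = 512, 1e-6
coords = (np.arange(n) - n / 2) * pitch
x, y = np.meshgrid(coords, coords)

# Object wave: spherical wave from the point source, normalized so |O| <= 1
r = np.sqrt(x**2 + y**2 + z**2)
object_wave = np.exp(1j * k * r) / r * z

# Reference wave: unit-amplitude plane wave at normal incidence
reference_wave = np.ones_like(object_wave)

# The hologram records the interference pattern |O + R|^2
hologram = np.abs(object_wave + reference_wave) ** 2
```

Illuminating this recorded pattern with the same reference wave would, per the theory above, diffract light that reconstructs the object wave (along with the transmitted and conjugate terms discussed next).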
A viewer can see the image as a solid object without any special glasses. However, undesired forms of light, including transmitted and conjugate light, are produced as well. Transmitted light comes from the illumination and passes through the hologram unchanged. Conjugate light refers to phase-conjugate light, whose phase components are inverted with respect to the object light.

4.2 Electronic Holography Research Trends

To employ holography in a 3D display capable of showing motion, the interference pattern must be rewritten at the refresh rate of the display. Electronic holography does this by using an electronic spatial light modulator, such as an LCD panel, in place of the photographic plate. Photographic plates used for holograms have extremely high resolutions of several thousand lines/mm, but even the best electronic displays offer only about 100 lines/mm. For this reason, the diffraction angles obtainable with electronic displays are small compared with photographic plates, so the range over which the 3D image can be viewed (the viewing zone) is narrow, making binocular 3D viewing difficult. Also, the angles between the object light, conjugate light, and transmitted light are small, so all three reach the observer's eyes, causing interference. At present, electronic displays with high enough resolution are small, only several inches diagonally, so only small 3D images can be reproduced. Methods have been proposed for dealing with these issues, including restricting the spread of the light during recording and playback to eliminate the interfering light 11) and using the high-order diffracted light produced by the pixel structure of the electronic display to expand the viewing zone 12). However, the fundamental way to resolve these issues is to reduce the pixel pitch of the display, thereby increasing the diffraction angles.

Figure 5: Holography. (a) Recording: a laser beam is split by a half mirror; one beam illuminates the object and the other illuminates the photographic plate as the reference beam, recording the interference pattern (hologram) in a dark room. (b) Playback: illuminating the hologram with the reference beam produces light equivalent to the object light, forming an optical image of the object for the observer.

As a way of reducing the pixel pitch of electronic displays, our laboratory has proposed the spin spatial light modulator (spin-SLM) 13), which combines the magneto-optical effect of magnetic materials *4 with spin-injection magnetic reversal, in which an electric current reverses the direction of magnetization of a magnetic material. Existing LCD displays have pixel pitches no smaller than about 5 μm, yielding diffraction angles of about 7 degrees (for a light wavelength of 632.8 nm), but we hope to achieve a pixel pitch as small as 1 μm and diffraction angles of about 39 degrees with the spin-SLM. For details regarding the spin-SLM, see the article "Research Trends on Holographic 3D Display Devices" in this issue.

*4 A phenomenon in which the polarization of light changes when it is reflected off a magnetized material.

To expand the area for displaying interference patterns, we have developed a system 14) that combines electronic displays in a tiled pattern. With this system, we combined the interference patterns of 16 4K LCD panels (each 21.1 mm diagonal) after first optically magnifying them to eliminate the frames around the perimeter of each panel. In this way, we built an 85-mm-diagonal 3D display. To display an interference pattern on an electronic display for electronic holography, the pattern has to be turned into numeric data. One method is to capture the interference pattern directly using electronic image capture devices 15). Although this can produce high-quality interference patterns, it requires laser light and a darkroom; hence, it cannot be used to record large objects.
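The diffraction angles quoted above are consistent with treating the display's pixel structure as a grating whose period is the pixel pitch, so the first-order angle satisfies sin θ = λ/p (our reading of the figures; a quick check):

```python
import math

def diffraction_angle_deg(pitch_m, wavelength_m=632.8e-9):
    """First-order grating angle from sin(theta) = lambda / pitch, in degrees."""
    return math.degrees(math.asin(wavelength_m / pitch_m))

print(diffraction_angle_deg(5e-6))   # ~7 degrees for a 5 um LCD pixel pitch
print(diffraction_angle_deg(1e-6))   # ~39 degrees for the 1 um spin-SLM target
```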
If interference pattern data could be obtained under natural light, the restrictions on subject matter imposed by laser light would be relaxed, and the applicability of holography to 3D television would increase. Accordingly, methods have been proposed for generating interference pattern data without using laser light, e.g., using depth information about the object to compute the light propagation paths and generate the interference pattern 16). Another way is to capture images using integral imaging, simulate integral image reproduction to compute the object light, and use that to generate the interference pattern 17). A color electronic holography system using the latter method has been developed that can record and play back a moving object in real time 18)19) (Figure 6). This system limits the location of the interference pattern to the focal plane of the lens array, thereby reducing the time required to generate the pattern 20), and it generates three interference patterns (red, green, and blue) at 30 frames/s from elemental images captured with a high-definition (HD) television camera. The generated interference patterns are displayed on separate LCD panels, illuminated with laser light of the corresponding color, and the reproduced object light is combined to display color 3D video.

Figure 6: Color electronic holography (from reference 19)). (a) Configuration: photographic equipment (compound lens and specialized camera), conversion equipment (a personal computer converting multi-viewpoint image data to hologram data), and display equipment (red, green, and blue lasers, LCD panels, mirrors, optical filters, half mirrors, a condenser lens, and a convex lens; viewing angle 2 degrees). (b) Sample reproduced image (size approx. 1 cm).

5. Conclusion

We have described the theory of and research activities on two spatial imaging methods: integral imaging and holography. These methods can produce natural 3D displays that do not strain human vision and may be suitable for showing 3D television in a wide range of viewing environments. However, compared with conventional video systems that capture 2D images from a single viewpoint, spatial imaging methods must record much more light information, even all of it in the case of holography. For example, integral imaging equivalent to the system in reference 6) produces light rays in 38 directions horizontally and 38 directions vertically, so displaying a 3D image with quality equivalent to a standard-resolution TV of 800 × 450 pixels would require 800 × 450 × 38 × 38 display pixels. This is approximately 16 times the pixel count of 8K Super Hi-Vision, which is 7,680 × 4,320 pixels. To reproduce 3D images with even more depth, light rays at still higher density must be produced, further increasing the number of pixels required. To implement spatial imaging, this huge amount of information must somehow be recorded, transmitted, and played back. The performance of current imaging devices for recording and reproduction is clearly inadequate, and devices with narrower pixel pitches and more pixels will be needed; it will take time to develop devices with the required performance. To address these issues, our laboratory is studying how to increase the number of pixels by combining multiple devices 7)8) and is conducting research on narrowing the pixel pitch with the new spin-SLM display 13).
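The pixel budget in this comparison can be verified directly:

```python
# Standard-TV-quality 3D image: 800 x 450 image points, each reproduced
# as rays in 38 horizontal x 38 vertical directions (figures from the text)
integral_pixels = 800 * 450 * 38 * 38

# 8K Super Hi-Vision panel
shv_pixels = 7680 * 4320

print(integral_pixels)               # 519,840,000 display pixels needed
print(integral_pixels / shv_pixels)  # roughly 16 times 8K
```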
Regarding transmission, technologies to compress the 3D information to a practical, transmittable volume need to be developed, and we have begun studying this as well. We are conducting research with the goal of developing practical 3D television around 2030. In particular, it will be important to improve the resolution of 3D images in the depth direction. The quality of the 3D image greatly

affects system design factors such as system scale and volume of information. We will set device and system development targets in stages and proceed with our research, taking these issues into consideration and incorporating technical developments from around the world.

(Tomoyuki Mishina)

References

1) C. Wheatstone: Contributions to the Physiology of Vision. Part the First. On Some Remarkable, and Hitherto Unobserved, Phenomena of Binocular Vision, Philosophical Transactions of the Royal Society of London, Vol. 128, pp. 371-394 (1838)
2) M. G. Lippmann: Épreuves Réversibles Donnant la Sensation du Relief, J. Phys., Vol. 4, pp. 821-825 (1908)
3) D. Gabor: A New Microscopic Principle, Nature, Vol. 161, pp. 777-778 (1948)
4) F. Okano, H. Hoshino, J. Arai and I. Yuyama: Real-time Pickup Method for a Three-dimensional Image Based on Integral Photography, Appl. Opt., Vol. 36, No. 7, pp. 1598-1603 (1997)
5) J. Arai, F. Okano, H. Hoshino and I. Yuyama: Gradient-index Lens-array Method Based on Real-time Integral Photography for Three-dimensional Images, Appl. Opt., Vol. 37, No. 11, pp. 2034-2045 (1998)
6) J. Arai, M. Kawakita, T. Yamashita, H. Sasaki, M. Miura, H. Hiura, M. Okui and F. Okano: Integral Three-dimensional Television with Video System Using Pixel-offset Method, Optics Express, Vol. 21, No. 3, pp. 3474-3485 (2013)
7) M. Miura, N. Okaichi, J. Arai and T. Mishina: Integral Three-dimensional Capture System with Enhanced Viewing Angle by Using Camera Array, Proc. SPIE, Vol. 9391, 93914 (2015)
8) N. Okaichi, M. Miura, J. Arai and T. Mishina: Integral 3D Display Using Multiple LCDs, Proc. SPIE, Vol. 9391, 939134 (2015)
9) K. Ikeya, K. Hisatomi, M. Katayama and Y. Iwadate: Integral 3D Contents Production from Multi-View Images: 3D Modeling and 3D Image Conversion of Sports Scene, ITE Journal, Vol. 67, No. 7, pp. J229-J240 (2013) (in Japanese)
10) H. Hiura, T. Mishina, J. Arai and Y. Iwadate: Accommodation Response Measurements for Integral 3D Image, Proc. SPIE, Vol. 9011, 9011-48 (2014)
11) T. Mishina, F. Okano and I. Yuyama: Time-alternating Method Based on Single-sideband Holography with Half-zone-plate Processing for the Enlargement of Viewing Zones, Appl. Opt., Vol. 38, No. 17, pp. 3703-3713 (1999)
12) T. Mishina, M. Okui and F. Okano: Viewing-zone Enlargement Method for Sampled Hologram that Uses High-order Diffraction, Appl. Opt., Vol. 41, No. 8, pp. 1489-1499 (2002)
13) K. Aoshima, N. Funabashi, K. Machida, Y. Miyamoto, K. Kuga, T. Ishibashi, N. Shimidzu and F. Sato: Submicron Magneto-Optical Spatial Light Modulation Device for Holographic Displays Driven by Spin-Polarized Electrons, J. Display Tech., Vol. 6, No. 9, pp. 374-380 (2010)
14) H. Sasaki, K. Yamamoto, K. Wakunami, Y. Ichihashi, R. Oi and T. Senoh: Large Size Three-dimensional Video by Electronic Holography Using Multiple Spatial Light Modulators, Sci. Rep., Vol. 4, 6177 (2014)
15) N. Hashimoto, S. Morokawa and K. Kitamura: Real-time Holography Using the High-resolution LCTV-SLM, Proc. SPIE, Vol. 1461, pp. 291-302 (1991)
16) R. Oi and M. Okui: A Method for Displaying a Real-scene Fresnel Hologram, ITE Journal, Vol. 30, No. 43, pp. 3-6 (2006) (in Japanese)
17) T. Mishina, M. Okui and F. Okano: Calculation of Holograms from Elemental Images Captured by Integral Photography, Appl. Opt., Vol. 45, No. 17, pp. 4026-4036 (2006)
18) K. Yamamoto, T. Mishina, R. Oi, T. Senoh and M. Okui: A Real-time Color Holography System for Live Scene, Proc. SPIE, Vol. 7233, 723310 (2009)
19) http://www.nict.go.jp/press/2008/press-20081117.pdf (in Japanese)
20) R. Oi, Y. Mishina, M. Okui, Y. Nojiri and F. Okano: A Fast Hologram Calculation Method for Real s, ITE Journal, Vol. 61, No. 2, pp. 198-203 (2007) (in Japanese)