Dual-Projection Based High-Precision Integral Imaging Pickup System

  • Conference paper
  • Part of the book series: Lecture Notes in Computer Science (LNIP, volume 9218)


Abstract

In this paper, a high-precision integral imaging (II) pickup system for the real scene is proposed. The dual-projection optical pickup method is utilized to obtain the elemental image array for the II display. The proposed method is robust to the position deviations of the projectors and camera. The calibration of the camera is simplified. Furthermore, the pickup of the II is not limited by the complex optical and mechanical structures. Experimental results show that the proposed system can generate the continuous and tunable parallaxes. With the proposed II pickup and display system, the high-quality 3D images for the real scene can be reconstructed efficiently.


1 Introduction

Integral imaging (II) is an attractive three-dimensional (3D) technology that can reconstruct autostereoscopic 3D images without glasses and provide both horizontal and vertical parallaxes with continuous views [1–3]. Basically, the conventional II system consists of a pickup section and a display section. In the pickup section, however, there are still problems such as the limitations of 3D resolution, parallax range, and scene size, which hinder the practical application of II. In the past decades, many researchers have focused on solving these problems, and many technologies have been proposed, including optical pickup methods, computer graphics technology, and depth-camera-based technology.

The conventional optical pickup method using a micro-lens array (MLA) is limited by the scene size, unnecessary beams, and aberrations [4–7]. It is difficult to capture the 3D information of a real, large-sized 3D scene in practice. The quality of the reconstructed 3D image is also reduced by the limitations of the manufacturing technique used for the MLA. Some researchers replace the MLA with a camera array in the pickup part to collect the full-color and high-resolution 3D information of a large-sized real 3D scene [8–11]. The elemental image array (EIA) is then generated from the 3D information by pixel mapping algorithms [11, 12]. Although those methods can be applied to large-sized real scenes, they require complex optical and mechanical structures. Furthermore, the calibration of the camera array becomes difficult when it contains a large number of cameras [9, 11].

In recent years, with the development of computer graphics technology, computer-generated integral imaging (CGII) has been proposed [12–16]. In CGII, however, the limitations of computer graphics technology itself make the capture of 3D information of real scenes difficult. To simplify the II pickup, a great contribution has been made by Chungbuk National University, which collects the EIA of the real scene with a depth camera [17]. This method simplifies the pickup process, but it is limited by the accuracy and resolution of the depth camera. Occlusions and holes in the depth map seriously degrade the quality of the EIA. Some studies combine the optical and computer-generated methods; for example, a dual-camera enabled II pickup system has been proposed [18]. Obviously, the generated parallaxes are limited by the two-view stereoscopic camera and the stereo matching algorithms [19].

In this paper, we propose a system that achieves high-precision II pickup for the real scene. The dual-projection optical pickup (DPOP) method is used to capture the 3D information without complex calibration. This method obtains a more complete reconstructed 3D shape. Sub-images are then generated from the color texture and depth data of the real scene, and the EIA for the II display is generated by interleaving the obtained sub-images. Experimental results verify the usefulness of the proposed system.

2 Principle of the Proposed II Pickup System

We achieve the high-precision II pickup system for the real scene based on the DPOP method. In the proposed system, a more complete 3D shape of the real scene can be obtained. The sub-images, which provide continuous and tunable parallaxes, are extracted from the color texture and depth data. By interleaving the sub-images, the high-precision EIA can be obtained for the II display.

The architecture of the proposed system is composed of four parts: the input part, including the parameters of the DPOP method and of the II display; the reconstruction part, including the reconstruction of the more complete 3D shape and the capture of the corresponding color texture; the EIA generation part, including the generation of the sub-images and the pixel mapping for the EIA; and the 3D display part, which shows the EIA through the MLA to the viewers.

2.1 Comparison of the Conventional and Proposed II Pickup System

In the conventional II pickup system, as shown in Fig. 1(a) and (b), the EIA is generated by the MLA or by a camera array. In Fig. 1(a), the real scene is captured as elemental images through each micro-lens in the MLA. The size of the scene and the accuracy of the EIA are limited by the parameters of the MLA. The cross-talk between neighboring micro-lenses also reduces the quality of the reconstructed 3D images [20]. In Fig. 1(b), a camera array is arranged to pick up the real scene, but the camera array needs accurate calibration, operation, and synchronization. In addition, latency and bandwidth are also limiting factors.

Fig. 1. Comparison of EIA generation in the II system (a) by the MLA, (b) by the camera array, and (c) by the proposed DPOP method.

In our proposed system, as shown in Fig. 1(c), we utilize two digital light processing (DLP) projectors to project structured light [21–24]. The use of dual projection, DLP1 and DLP2, avoids the error accumulation caused by the occlusions of a single DLP [25]. In the proposed DPOP method, DLP1 and DLP2 project grating patterns onto the real scene, and the deformed patterns are captured by a charge-coupled device (CCD). The 3D shape of the real scene is modulated into the deformed patterns. With phase unwrapping algorithms and mapping algorithms, the 3D shape can be extracted from the deformed patterns [26–29]. However, the phase unwrapping algorithms are not suitable for the blind areas, which causes error accumulation in the reconstructed 3D shape, so the reconstructed 3D shape may not be complete with a single DLP projection [25]. In the proposed DPOP method, the two DLPs project grating patterns from different directions, and the 3D shape is reconstructed with each DLP respectively. These 3D shapes have some imperfections, and we introduce a fusion and stitching algorithm to obtain a more complete 3D shape.

2.2 Reconstruction of 3D Shape for Real Scene by DPOP Method

In this paper, the DPOP method is proposed to obtain the complete 3D shape of the real scene. Two DLPs are utilized to project grating patterns so as to avoid the blind areas of a single DLP. The reconstructed 3D shapes can then be fused into a complete shape.

As shown in Fig. 2, DLP1 and DLP2 are arranged in front of the real scene and project N grating patterns. The N grating patterns are arranged by a sinusoidal rule, with an equal phase shift of 2π/N between adjacent grating patterns. The CCD captures the j-th deformed pattern from DLPi (i = 1, 2), and the intensity of the captured deformed pattern is denoted as Ii(x, y, j):

$$ I_{i} (x,y,j) = R_{i} (x,y)\left\{ {A_{i} (x,y) + B_{i} (x,y)\cos [\varphi_{i} (x,y) + \sigma_{j} ]} \right\} $$
(1)

where j = 1, 2, …, N, and x, y are the pixel coordinates in the captured deformed patterns, Ri(x, y) is the surface reflectance of the real scene, Ai(x, y) represents the background light intensity, Bi(x, y) is the fringe contrast, φi(x, y) indicates the deformed phase modulated by the real scene, and σj is the phase shift of the j-th deformed pattern.
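As a concrete illustration, below is a minimal sketch of how N phase-shifted sinusoidal gratings of the form assumed in Eq. (1) could be generated for projection. It uses NumPy; the resolution and fringe period are illustrative values, not parameters taken from the paper.

```python
import numpy as np

def grating_patterns(width=1024, height=768, period=32, n_steps=4):
    """Generate N phase-shifted sinusoidal fringe patterns for projection.

    Adjacent patterns differ by a phase shift of 2*pi/N, matching the
    sinusoidal rule described for the projected gratings.
    """
    x = np.arange(width)
    patterns = []
    for j in range(n_steps):
        sigma_j = 2 * np.pi * j / n_steps              # phase shift of the j-th pattern
        fringe = 0.5 + 0.5 * np.cos(2 * np.pi * x / period + sigma_j)
        patterns.append(np.tile(fringe, (height, 1)))  # vertical fringes, values in [0, 1]
    return patterns
```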

Fig. 2. Principle of the reconstruction of the 3D shape by the proposed DPOP method.

As in structured light illumination with a single DLP [22, 29], the truncated phase φ′i(x, y) of the deformed phase φi(x, y) can be deduced as:

$$ \varphi '_{i} (x,y) = \arctan \frac{{\sum\limits_{n = 1}^{N} {I_{i} (x,y,n)\sin (\sigma_{n} )} }}{{\sum\limits_{n = 1}^{N} {I_{i} (x,y,n)\cos (\sigma_{n} )} }}. $$
(2)
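For readers who want to reproduce this step, here is a minimal sketch of the N-step wrapped-phase computation of Eq. (2), assuming the N captured deformed patterns are available as NumPy arrays (arctan2 keeps the quadrant information):

```python
import numpy as np

def wrapped_phase(images):
    """Truncated (wrapped) phase from N phase-shifted deformed patterns, Eq. (2).

    `images` is a list of N arrays I_i(x, y, j) captured by the CCD, where the
    j-th pattern carries a known phase shift sigma_j = 2*pi*j/N.
    """
    n = len(images)
    sigmas = 2 * np.pi * np.arange(n) / n
    num = sum(img * np.sin(s) for img, s in zip(images, sigmas))
    den = sum(img * np.cos(s) for img, s in zip(images, sigmas))
    return np.arctan2(num, den)   # wrapped phase, still to be unwrapped
```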

According to the range of the inverse trigonometric function, φ′i(x, y) takes values in [−π, π). To obtain a continuous phase distribution, the truncated phase φ′i(x, y) needs to be unwrapped by a phase unwrapping algorithm [22, 28], and the unwrapped phase is denoted as Ψi(x, y). Then the phase change Δφi(x, y) between the unwrapped phase of the real scene and that of the reference plane can be calculated. According to the phase-to-height mapping algorithm, the height Δhi(x, y) of the captured real scene can be calculated as follows:

$$ \frac{1}{{\Delta h_{i} (x,y)}} = a_{i} (x,y) + \frac{{b_{i} (x,y)}}{{\Delta \varphi_{i} (x,y)}} + \frac{{c_{i} (x,y)}}{{\Delta \varphi_{i}^{2} (x,y)}}, $$
(3)

where ai(x, y), bi(x, y), and ci(x, y) are the mapping parameters, which can be acquired by plane calibrations. After processing the deformed pattern information, we obtain the height and contour information of the real scene. The height Δhi(x, y) obtained by a single DLPi may not be complete because of the blind areas. However, the obtained height Δhi(x, y) is determined solely by the real scene, not by the measurement system; in other words, the height Δhi(x, y) is independent of the parameters of the DPOP method. Therefore, the different heights Δhi(x, y) can be fused and stitched together to obtain a more complete 3D shape. The fused height ΔH(x, y) can be obtained as:

$$ \Delta H(x,y) = \sum\limits_{i = 1}^{M} {\Delta h_{i} (x_{i} ,y_{i} )} ,\quad (x_{i} ,y_{i} ) \in \Omega_{i} , $$
(4)
$$ \sum\limits_{i = 1}^{M} {\Omega _{i} =\Omega } , $$
(5)

where Ωi represents the pixel region in which the reconstructed height Δhi(x, y) has no accumulated errors from the phase unwrapping algorithm, and Ω represents the whole pixel region.
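To make the fusion of Eqs. (4) and (5) concrete, the following is a hedged sketch assuming each single-DLP reconstruction comes with a boolean mask that marks its region Ωi free of unwrapping errors; how such masks are obtained is not specified here.

```python
import numpy as np

def fuse_heights(heights, masks):
    """Fuse single-DLP height maps into one complete shape, Eqs. (4)-(5).

    heights : list of M arrays, the height map Delta_h_i from DLP i.
    masks   : list of M boolean arrays, True inside Omega_i, i.e. where the
              phase unwrapping produced no accumulated error; together the
              masks are assumed to cover the whole pixel region Omega.
    """
    fused = np.zeros_like(heights[0])
    filled = np.zeros(heights[0].shape, dtype=bool)
    for h, m in zip(heights, masks):
        take = m & ~filled        # each pixel is taken from the first DLP that is valid there
        fused[take] = h[take]
        filled |= take
    return fused
```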

2.3 Generation Method of Sub-images and EIA for II Display

For the high-precision EIA, we first generate the sub-images. A sub-image, which is a collection of the pixels at the same position in every elemental image, has an orthographic projection geometry. In the II display, the sub-images represent a series of directional images. As shown in Fig. 3, the real scene is imaged onto the EIA plane by the MLA. The parallel rays with the same directional angle θ can be extracted to form an orthographic sub-image [16, 30]. Figure 3(a) and (b) show the generation geometries of the sub-images and EIAs with different central depth planes. The pixel information of the sub-images is extracted from the color texture, and the pixel coordinates are determined by the central depth plane and the depth data. The depth data ΔD(x, y) can be transformed from the fused height ΔH(x, y):

$$ \Delta D(x,y) = \Delta H(x,y)\frac{{R_{w} }}{W} = \Delta H(x,y)\frac{{R_{h} }}{H}, $$
(6)

where W and H are the real width and height of the real scene, and Rw × Rh is the resolution of the captured deformed pattern. Equation (6) converts the height ΔH(x, y) from the true height to pixel coordinates, as the depth data ΔD(x, y). In the sub-image, as shown in Fig. 3, the pixel information at K is mapped to the pixel coordinate G, and the pixel shift between K and G is denoted as Δq. According to the geometry shown in Fig. 3, the sub-image Iθ(x, y) for the projecting angle θ can be deduced as:

$$ I_{\theta } (x,y) = T(x +\Delta q_{x} ,y +\Delta q_{y} ), $$
(7)

where T(x, y) is the pixel information of the color texture, and Δqx and Δqy are the components of the pixel shift Δq along the x and y axes, respectively. The pixel shift Δq depends on the depth data and the central depth plane, and can be calculated as:

$$ \Delta q = \left( {\Delta D(x,y) - d_{c} } \right)\tan \theta , $$
(8)

where dc, described in pixel coordinates, is the distance between the zero plane of the depth data (z = 0) and the central depth plane.
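As an illustration of Eqs. (6)–(8), the sketch below generates one orthographic sub-image from the color texture and the pixel-unit depth data by per-pixel shifting. The separate horizontal and vertical angles, the nearest-pixel sampling, and the border clamping are simplifying assumptions, not details taken from the paper.

```python
import numpy as np

def sub_image(texture, depth, theta_x, theta_y, d_c):
    """Orthographic sub-image for the projecting angles (theta_x, theta_y).

    texture : (H, W, 3) color texture T(x, y)
    depth   : (H, W) depth data Delta_D(x, y), already in pixel units
    d_c     : central depth plane position, in pixel units
    """
    h, w = depth.shape
    ys, xs = np.mgrid[0:h, 0:w]
    # pixel shift of Eq. (8), one component per axis
    dq_x = (depth - d_c) * np.tan(theta_x)
    dq_y = (depth - d_c) * np.tan(theta_y)
    # sample the texture at the shifted coordinates, Eq. (7) (nearest pixel, clamped)
    src_x = np.clip(np.round(xs + dq_x).astype(int), 0, w - 1)
    src_y = np.clip(np.round(ys + dq_y).astype(int), 0, h - 1)
    return texture[src_y, src_x]
```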

Fig. 3. Geometry of the generation of the sub-images and EIAs in the proposed system: (a) and (b) with different central depth planes.

In the proposed system, the projecting angle θ can be deduced from the parameters of the II display. As shown in Fig. 3, the gap between the MLA and the EIA is g, and the interval between an elemental image pixel and the centre of its elemental image is Δr. The projecting angle θ is determined by:

$$ \theta = \arctan \frac{{\Delta r}}{g}. $$
(9)

For different intervals Δr, the projecting angle θ is different, so the parallaxes are continuous and tunable.
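A small sketch of Eq. (9), computing the projecting angle for every pixel offset inside an elemental image (the pixel pitch and the centring convention are assumptions made for illustration):

```python
import numpy as np

def projecting_angles(pixels_per_lens, pixel_pitch, gap):
    """Projecting angle theta = arctan(dr / g) for each pixel offset, Eq. (9).

    pixels_per_lens : number of EIA pixels behind one micro-lens (per axis)
    pixel_pitch     : physical pitch of one EIA pixel
    gap             : gap g between the MLA and the EIA plane
    """
    # signed interval dr of every pixel from the centre of its elemental image
    dr = (np.arange(pixels_per_lens) - (pixels_per_lens - 1) / 2) * pixel_pitch
    return np.arctan(dr / gap)
```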

With the sub-images obtained for all projecting angles, the EIA can be generated efficiently by interleaving the sub-images based on the viewpoint vector rendering method [30].
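Finally, a hedged sketch of one simple way to interleave the sub-images into the EIA: pixel (u, v) of every elemental image is taken from the sub-image associated with that offset. This only illustrates the mapping; the viewpoint vector rendering method of [30] itself is not reproduced here.

```python
import numpy as np

def interleave(sub_images, lens_count_x, lens_count_y):
    """Interleave sub-images into an elemental image array (EIA).

    sub_images[(u, v)] is the sub-image for pixel offset (u, v) inside an
    elemental image; each sub-image holds one pixel per micro-lens, i.e. it
    has shape (lens_count_y, lens_count_x, 3).
    """
    offsets_u = max(u for u, _ in sub_images) + 1
    offsets_v = max(v for _, v in sub_images) + 1
    sample = next(iter(sub_images.values()))
    eia = np.zeros((lens_count_y * offsets_v, lens_count_x * offsets_u, 3),
                   dtype=sample.dtype)
    for (u, v), img in sub_images.items():
        # pixel (u, v) of every elemental image comes from sub-image (u, v)
        eia[v::offsets_v, u::offsets_u] = img
    return eia
```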

3 Experiments and Results

In our experiments, we use two projectors (CB-X24) as DLP1 and DLP2 to project the N grating patterns with N = 4, so the phase shift between adjacent patterns is 2π/4 = π/2. The CCD (GM501-H) captures the deformed patterns at 640 × 480 pixels. The generated EIA is displayed on the II pad [15]. The experimental setup is shown in Fig. 4.

Fig. 4. Experimental setup of the proposed II pickup system.

The proposed II pickup system is configured with the specifications in Table 1. The distance between the CCD and the DLPs is 0.331 m, and the central depth plane is located at the dc = 0 pixel plane and the dc = 130 pixel plane, respectively.

Table 1. Configuration parameters and experiment environment of the proposed II system

In our experiments, a “man head” is used as the real scene. We reconstruct the 3D shapes of the head from the deformed patterns (Fig. 5(a) and (c)) captured with each DLP, respectively. As shown in Fig. 5(b) and (d), the 3D shapes reconstructed with a single DLP are not complete. Figure 5(e) shows the fused 3D shape in our proposed system. From the profile shown in Fig. 5(f), we can see that the fused 3D shape is complete and has no error accumulation.

Fig. 5. Captured deformed patterns and reconstructed 3D shapes in the experiments: (a) and (c) the deformed patterns by DLP1 and DLP2, (b) and (d) the 3D shapes reconstructed from (a) and (c), (e) the fused 3D shape in the proposed system, and (f) the profile of the fused 3D shape (y = 230).

We generate the sub-images by the proposed method from the depth data and color texture of the head, as shown in Fig. 6(a)–(c). The projecting angle θ is continuous and tunable. The EIAs generated with different central depth planes are shown in Fig. 6(d) and (e).

Fig. 6. Sub-images and EIAs generated by the proposed method: (a), (b), and (c) the sub-images, (d) and (e) the EIAs with different central depth planes.

When the viewer moves in front of the II display, the reconstructed images are captured from different positions, as shown in Fig. 7.

Fig. 7. Different views of the reconstructed 3D images: (a) top view, (b) left view, (c) front view, (d) right view, and (e) bottom view.

4 Conclusion

A dual-projection based high-precision II pickup system for the real scene is proposed in this paper. The proposed system takes advantage of the high accuracy of structured light illumination and simplifies the optical and mechanical structure for capturing the real scene. With the high-precision depth data and color texture, continuous and tunable parallaxes are generated in the experiments. With the proposed II pickup system, the high-quality EIA can be generated efficiently.

References

  1. Lippmann, G.: La photographie integrale. C. R. Acad. Sci. 146, 446–451 (1908)

  2. Hong, J., Kim, Y., Choi, H.J., Hahn, J., Park, J.H., Kim, H., Min, S.W., Chen, N., Lee, B.: Three-dimensional display technologies of recent interest: principles, status, and issues. Appl. Opt. 50(34), H87–H115 (2011)

  3. Xiao, X., Javidi, B., Martinez-Corral, M., Stern, A.: Advances in three-dimensional integral imaging: sensing, display, and applications [Invited]. Appl. Opt. 52(4), 546–560 (2013)

  4. Okano, F., Hoshino, H., Arai, J., Yuyama, I.: Real-time pickup method for a three-dimensional image based on integral photography. Appl. Opt. 36(7), 1598–1603 (1997)

  5. Jang, J.S., Javidi, B.: Improved viewing resolution of three-dimensional integral imaging by use of nonstationary micro-optics. Opt. Lett. 27(5), 324–326 (2002)

  6. Yoo, H.: Axially moving a lenslet array for high-resolution 3D images in computational integral imaging. Opt. Express 21(7), 8873–8878 (2013)

  7. Arai, J., Okui, M., Yamashita, T., Okano, F.: Integral three-dimensional television using a 2000-scanning-line video system. Appl. Opt. 45(8), 1704–1712 (2006)

  8. Xu, Y., Wang, X.R., Sun, Y., Zhang, J.Q.: Homogeneous light field model for interactive control of viewing parameters of integral imaging displays. Opt. Express 20(13), 14137–14151 (2012)

  9. Sang, X.Z., Fan, F.C., Jiang, C.C., Choi, S., Dou, W.H., Yu, C., Xu, D.: Demonstration of a large-size real-time full-color three-dimensional display. Opt. Lett. 34(24), 3803–3805 (2009)

  10. Moon, I., Javidi, B.: Three-dimensional recognition of photon-starved events using computational integral imaging and statistical sampling. Opt. Lett. 34(6), 731–733 (2009)

  11. Navarro, H., Dorado, A., Saavedra, G., Llavador, A., Martínez-Corral, M., Javidi, B.: Is it worth using an array of cameras to capture the spatio-angular information of a 3D scene or is it enough with just two? In: Proceedings of SPIE, vol. 8384, pp. 838406–838406-7 (2012)

  12. Liao, H., Iwahara, M., Hata, N., Dohi, T.: High-quality integral videography using a multiprojector. Opt. Express 12(6), 1067–1076 (2004)

  13. Igarashi, Y., Murata, H., Ueda, M.: 3D display system using a computer generated integral photography. Jpn. J. Appl. Phys. 17(9), 1683–1684 (1978)

  14. Jang, Y.H., Park, C., Jung, J.S., Park, J.H., Kim, N., Ha, J.S., Yoo, K.H.: Integral imaging pickup method of bio-medical data using GPU and Octree. J. Korea Contents Assoc. 10(6), 1–9 (2010)

  15. Ji, C.C., Luo, C.G., Deng, H., Li, D.H., Wang, Q.H.: Tilted elemental image array generation method for moiré-reduced computer generated integral imaging display. Opt. Express 21(17), 19816–19824 (2013)

  16. Kwon, K.C., Park, C., Erdenebat, M.U., Jeong, J.S., Choi, J.H., Kim, N., Park, J.H., Lim, Y.T., Yoo, K.H.: High speed image space parallel processing for computer-generated integral imaging system. Opt. Express 20(2), 732–740 (2012)

  17. Li, G., Kwon, K.C., Shin, G.H., Jeong, J.S., Yoo, K.H., Kim, N.: Simplified integral imaging pickup method for real objects using a depth camera. J. Opt. Soc. Korea 16(4), 381–385 (2012)

  18. Jiao, X.X., Zhao, X., Yang, Y., Fang, Z.L., Yuan, X.C.: Dual-camera enabled real-time three-dimensional integral imaging pick-up and display. Opt. Express 20(25), 27304–27311 (2012)

  19. Cooperation Stereo Vision. http://www.cs.cmu.edu/clz/stereo.html

  20. Kavehvash, Z., Mehrany, K., Bagheri, S.: Optimization of the lens-array structure for performance improvement of integral imaging. Opt. Lett. 36(20), 3993–3995 (2011)

  21. Srinivasan, V., Liu, H.C., Halioua, M.: Automated phase-measuring profilometry of 3-D diffuse objects. Appl. Opt. 23(18), 3105–3108 (1984)

  22. Kim, E.H., Hahn, J., Kim, H., Lee, B.: Profilometry without phase unwrapping using multi-frequency and four-step phase-shift sinusoidal fringe projection. Opt. Express 17(10), 7818–7830 (2009)

  23. Kim, J., Jung, J.H., Jang, C., Lee, B.: Real-time capturing and 3D visualization method based on integral imaging. Opt. Express 21(16), 18742–18753 (2013)

  24. Schaffer, M., Grosse, M., Kowarschik, R.: High-speed pattern projection for three-dimensional shape measurement using laser speckles. Appl. Opt. 49(18), 3622–3629 (2010)

  25. Su, L., Su, X., Li, W., Xiang, L.: Application of modulation measurement profilometry to objects with surface holes. Appl. Opt. 38(7), 1153–1158 (1999)

  26. Ou, P., Li, B., Wang, Y., Zhang, S.: Flexible real-time natural 2D color and 3D shape measurement. Opt. Express 21(14), 16736–16741 (2013)

  27. Zhang, S., Van Der Weide, D., Oliver, J.: Superfast phase-shifting method for 3-D shape measurement. Opt. Express 18(9), 9684–9689 (2010)

  28. Liu, K., Wang, Y., Lau, D.L., Hao, Q., Hassebrook, L.G.: Dual-frequency pattern scheme for high-speed 3-D shape measurement. Opt. Express 18(5), 5229–5244 (2010)

  29. Xu, Y., Jia, S., Bao, Q., Chen, H., Yang, J.: Recovery of absolute height from wrapped phase maps for fringe projection profilometry. Opt. Express 22(14), 16819–16828 (2014)

  30. Park, K.S., Min, S.W., Cho, Y.: Viewpoint vector rendering for efficient elemental image generation. IEICE Trans. Inf. Syst. E90-D, 233–241 (2007)

  31. Kang, H.H., Lee, J.H., Kim, E.S.: Enhanced compression rate of integral images by using motion-compensated residual images in three-dimensional integral-imaging. Opt. Express 20(5), 5440–5459 (2012)

  32. Lee, J.J., Shin, D.H., Lee, B.G.: Simple correction method of distorted elemental images using surface markers on lenslet array for computational integral imaging reconstruction. Opt. Express 17(20), 18026–18037 (2009)


Acknowledgment

The work is supported by the NSFC under Grant Nos. 61225022 and 61320106015, the “973” Program under Grant No. 2013CB328802, and the “863” Program under Grant No. 2015AA015902.

Author information

Authors and Affiliations

  1. School of Electronics and Information Engineering, Sichuan University, Chengdu, 610065, China

    Zhao-Long Xiong, Qiong-Hua Wang, Huan Deng & Yan Xing


Corresponding author

Correspondence to Qiong-Hua Wang.

Editor information

Editors and Affiliations

  1. Department of Electronic Engineering, Tsinghua University, Beijing, China

    Yu-Jin Zhang

Copyright information

© 2015 Springer International Publishing Switzerland

About this paper

Cite this paper

Xiong, Z.L., Wang, Q.H., Deng, H., Xing, Y. (2015). Dual-Projection Based High-Precision Integral Imaging Pickup System. In: Zhang, Y.J. (ed.) Image and Graphics. ICIG 2015. Lecture Notes in Computer Science, vol. 9218. Springer, Cham. https://doi.org/10.1007/978-3-319-21963-9_19


