Part of the book series: Lecture Notes in Computer Science (LNIP, volume 9218)
Abstract
In this paper, a high-precision integral imaging (II) pickup system for the real scene is proposed. The dual-projection optical pickup method is utilized to obtain the elemental image array for the II display. The proposed method is robust to the position deviations of the projectors and camera, and the calibration of the camera is simplified. Furthermore, the pickup of the II is not limited by complex optical and mechanical structures. Experimental results show that the proposed system can generate continuous and tunable parallaxes. With the proposed II pickup and display system, high-quality 3D images of the real scene can be reconstructed efficiently.
1 Introduction
Integral imaging (II), as an attractive three-dimensional (3D) technology, can reconstruct autostereoscopic 3D images without glasses and provide both horizontal and vertical parallaxes with continuous views [1–3]. Basically, the conventional II system consists of the pickup and display sections. In the pickup section, however, there are still some problems, such as the limitations of 3D resolution, parallax range, and scene size, which delay the practical application of II. In the past decades, many researchers have focused on solving these problems, and many technologies have been proposed, including optical pickup methods, computer graphic technology, and depth-camera-based technology.
The conventional optical pickup method using a micro-lens array (MLA) is limited by the scene size, unnecessary beams, and aberrations [4–7]. It is difficult to capture the 3D information of a real, large-sized 3D scene in practice, and the quality of the reconstructed 3D image is reduced by the limitations of the manufacturing technique used for the MLA. Some researchers replace the MLA with a camera array in the pickup part to collect the full-color and high-resolution 3D information of a large-sized real 3D scene [8–11]. The elemental image array (EIA) is then generated from the 3D information by pixel mapping algorithms [11, 12]. Although these methods can be applied to large-sized real scenes, they require complex optical and mechanical structures. Furthermore, the calibration of the camera array is difficult when it contains a large number of cameras [9, 11].
In recent years, with the development of computer graphic technology, computer-generated integral imaging (CGII) has been proposed [12–16]. In CGII, however, the limitations of computer graphic technology itself make the capture of 3D information of real scenes difficult. To simplify the II pickup, a great contribution has been made by Chungbuk National University, which collects the EIA of the real scene with a depth camera [17]. This method simplifies the pickup process, but it is limited by the accuracy and resolution of the depth camera, and the occlusions and holes in the depth map seriously degrade the quality of the EIA. Some studies have combined the optical and computer-generated methods, and a dual-camera-enabled II pickup system has been proposed [18]. However, the generated parallaxes are limited by the two-view stereoscopic camera and the stereo matching algorithms [19].
In this paper, we propose a system to achieve high-precision II pickup for the real scene. The dual-projection optical pickup (DPOP) method is used to capture the 3D information without complex calibration, and it obtains a more complete reconstructed 3D shape. Sub-images are then generated based on the color texture and depth data of the real scene, and the EIA for the II display is generated by interleaving the obtained sub-images. Experimental results verify the usefulness of the proposed system.
2 Principle of the Proposed II Pickup System
We achieve the high-precision II pickup system for the real scene based on the DPOP method. In the proposed system, a more complete 3D shape of the real scene can be obtained. The continuous and tunable parallaxes, i.e., the sub-images, are extracted from the color texture and depth data. By interleaving the sub-images, the high-precision EIA can be obtained for the II display.
The architecture of the proposed system is composed of four parts: the input part, which includes the parameters of the DPOP method and the II display input; the reconstruction part, which includes the reconstruction of the more complete 3D shape and the capture of the corresponding color texture; the EIA generation part, which includes the generation of the sub-images and the pixel mapping for the EIA; and the 3D display part, which shows the EIA through the MLA to the viewers.
2.1 Comparison of the Conventional and Proposed II Pickup System
In the conventional II pickup system, as shown in Fig. 1(a) and (b), the EIA is generated by the MLA or the camera array. In Fig. 1(a), the real scene is captured as elemental images through each micro-lens in the MLA. The size of the scene and the accuracy of the EIA are limited by the parameters of the MLA, and the cross-talk between neighboring micro-lenses also reduces the quality of the reconstructed 3D images [20]. In Fig. 1(b), a camera array is arranged to pick up the real scene, but the camera array needs accurate calibration, operation, and synchronization. In addition, latency and bandwidth are both limitations.
Fig. 1. Comparison of EIA generation in the II system (a) by the MLA, (b) by the camera array, and (c) by the proposed DPOP method.
In our proposed system, as shown in Fig. 1(c), we utilize two digital light processing projectors (DLPs) to project the structured light [21–24]. The use of dual projection, DLP1 and DLP2, avoids the error accumulation caused by the occlusion of a single DLP [25]. In the proposed DPOP method, DLP1 and DLP2 project grating patterns onto the real scene, and the deformed patterns are captured by the charge coupled device (CCD). The 3D shape of the real scene is modulated in the deformed patterns. With the phase unwrapping algorithms and the mapping algorithms, the 3D shape can be extracted from the deformed patterns [26–29]. However, the phase unwrapping algorithms are not suitable for the blind areas, which can cause error accumulation in the reconstructed 3D shape, so the reconstructed 3D shape may not be complete with single-DLP projection [25]. In the proposed DPOP method, two DLPs project grating patterns from different directions, and the 3D shape is reconstructed with each of the DLPs, respectively. These 3D shapes are each incomplete, so we introduce a fusion and stitching algorithm to obtain a more complete 3D shape.
2.2 Reconstruction of 3D Shape for Real Scene by DPOP Method
In this paper, the DPOP method is proposed to obtain the complete 3D shape of the real scene. Two DLPs are utilized to project grating patterns to avoid the blind areas of a single DLP, so that the reconstructed 3D shapes can be fused together completely.
As shown in Fig. 2, DLP1 and DLP2 are arranged in front of the real scene and project N grating patterns. The N grating patterns are arranged by a sinusoidal rule, with an equal 2π/N phase shift between adjacent grating patterns. The CCD captures the j-th deformed pattern from DLPi (i = 1, 2), and the intensity of the captured deformed pattern is denoted as Ii(x, y, j):
where j = 1, 2, …, N, x and y are the pixel coordinates in the captured deformed patterns, Ri(x, y) is the surface reflectance of the real scene, Ai(x, y) represents the background light intensity, Bi(x, y) is the fringe contrast, φi(x, y) indicates the deformed phase modulated by the real scene, and σj is the phase shift of the j-th deformed pattern.
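Under the standard N-step sinusoidal phase-shifting model assumed here, this intensity takes the form

$$I_i(x,y,j) = R_i(x,y)\left[A_i(x,y) + B_i(x,y)\cos\big(\varphi_i(x,y) + \sigma_j\big)\right].$$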
As in structured light illumination by a single DLP [22, 29], the truncated phase φ′i(x, y) of the deformed phase φi(x, y) can be deduced as:
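Assuming the usual N-step arctangent estimator consistent with the intensity model above,

$$\varphi'_i(x,y) = \arctan\!\left(\frac{-\sum_{j=1}^{N} I_i(x,y,j)\,\sin\sigma_j}{\sum_{j=1}^{N} I_i(x,y,j)\,\cos\sigma_j}\right).$$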
According to the inverse trigonometric functions, φ′i(x, y) has a value in [−π, π). For continuous phase distributions, the truncated phase φ′i(x, y) needs to be unwrapped by the phase unwrapping algorithm [22, 28], and the unwrapped phase is denoted as Ψi(x, y). Then the phase change ΔΨi(x, y) between the real scene and the reference plane can be calculated. According to the phase-to-height mapping algorithm, the height Δhi(x, y) of the captured real scene can be calculated as follows:
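One common three-parameter phase-to-height mapping, taken here as an assumed form of this relation, is

$$\frac{1}{\Delta h_i(x,y)} = a_i(x,y) + \frac{b_i(x,y)}{\Delta\Psi_i(x,y)} + \frac{c_i(x,y)}{\Delta\Psi_i^{2}(x,y)},$$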
where ai(x, y), bi(x, y), and ci(x, y) are the mapping parameters, which can be acquired by plane calibration. After processing the deformed patterns, we obtain the height and contour information of the real scene. The height Δhi(x, y) obtained by a single DLPi may be incomplete because of the blind areas. However, the obtained height Δhi(x, y) is determined only by the real scene, not by the measurement system; in other words, it is independent of the parameters of the DPOP method. Therefore, the different heights Δhi(x, y) can be fused and stitched together to obtain a more complete 3D shape. The fused height ΔH(x, y) can be obtained as:
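A piecewise fusion rule consistent with this description (averaging in the overlap region is an assumption of this sketch) is

$$\Delta H(x,y) = \begin{cases} \dfrac{\Delta h_1(x,y) + \Delta h_2(x,y)}{2}, & (x,y) \in \Omega_1 \cap \Omega_2, \\ \Delta h_1(x,y), & (x,y) \in \Omega_1 \setminus \Omega_2, \\ \Delta h_2(x,y), & (x,y) \in \Omega_2 \setminus \Omega_1, \end{cases} \qquad \Omega = \Omega_1 \cup \Omega_2,$$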
where Ωi represents the pixel region in which the reconstructed height Δhi(x, y) has no accumulated errors from the phase unwrapping algorithm, and Ω represents the whole pixel region.
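As a rough sketch of this fusion step (the array names, and the choice to average the two heights where both regions are valid, are illustrative assumptions rather than the paper's exact rule):

```python
import numpy as np

def fuse_heights(h1, h2, mask1, mask2):
    """Fuse two height maps reconstructed from DLP1 and DLP2.

    h1, h2 : height maps (H x W float arrays) from each single-DLP reconstruction.
    mask1, mask2 : boolean arrays marking the regions (Omega_1, Omega_2) where
                   phase unwrapping produced no accumulated error.
    """
    fused = np.zeros_like(h1)
    both = mask1 & mask2      # overlap: average the two reconstructions
    only1 = mask1 & ~mask2    # blind area of DLP2: keep the DLP1 height
    only2 = mask2 & ~mask1    # blind area of DLP1: keep the DLP2 height
    fused[both] = 0.5 * (h1[both] + h2[both])
    fused[only1] = h1[only1]
    fused[only2] = h2[only2]
    return fused
```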
2.3 Generation Method of Sub-images and EIA for II Display
For the high-precision EIA, we first generate the sub-images. The sub-image, which is a collection of the pixels at the same position in every elemental image, has an orthographic projection geometry. In the II display, the sub-images represent a series of directional images. As shown in Fig. 3, the real scene is imaged on the EIA plane by the MLA. The parallel rays with the same directional angle θ can be extracted to form an orthographic sub-image [16, 30]. Figure 3(a) and (b) show the generation geometries of the sub-images and EIAs with different central depth planes. The pixel information of the sub-images is extracted from the color texture, and the pixel coordinates are determined by the central depth plane and the depth data. The depth data ΔD(x, y) can be transformed from the fused height ΔH(x, y):
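In one assumed scaling form (taking the horizontal and vertical scales to be approximately equal, Rw/W ≈ Rh/H),

$$\Delta D(x,y) = \Delta H(x,y)\cdot\frac{R_w}{W},$$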
where W and H are the real width and height of the real scene, and Rw × Rh is the resolution of the captured deformed pattern. This conversion turns the height ΔH(x, y) from the true height into pixel coordinates, giving the depth data ΔD(x, y). In the sub-image, as shown in Fig. 3, the pixel information K is mapped to the pixel coordinate G, and the pixel shift between K and G is denoted as Δq. According to the geometry shown in Fig. 3, the sub-image Iθ(x, y) for the projecting angle θ can be deduced as:
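A plausible form consistent with the mapping of K to G described above is

$$I_\theta(x,y) = T\big(x + \Delta q_x,\; y + \Delta q_y\big),$$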
where T(x, y) is the pixel information of the color texture, and Δqx and Δqy are the components of the pixel shift Δq along the x and y axes, respectively. The pixel shift Δq depends on the depth data and the central depth plane, and can be calculated as:
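A sketch of this relation, assuming the shift is proportional to the depth offset from the central depth plane, is

$$\Delta q = \big(\Delta D(x,y) - d_c\big)\tan\theta,$$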
where dc, expressed in pixel coordinates, is the distance between the zero plane of the depth data (z = 0) and the central depth plane.
Fig. 3. Geometry of the generation of the sub-images and EIAs in the proposed system: (a) and (b) with different central depth planes.
In the proposed system, the projecting angle θ can be deduced from the parameters of the II display. As shown in Fig. 3, the gap between the MLA and the EIA is g, and the interval between the elemental image pixel and the center is Δr. The projecting angle θ can be decided by:
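From the geometry of Fig. 3, a natural form is

$$\theta = \arctan\!\left(\frac{\Delta r}{g}\right).$$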
For different intervals, the projecting angle θ is different, so the parallaxes are continuous and tunable.
With the sub-images obtained for all projecting angles, the EIA can be generated efficiently by interleaving the sub-images based on the viewpoint vector rendering method [30].
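A minimal sketch of this interleaving step, assuming a U × V lens array with P × P pixels per elemental image and one sub-image per intra-elemental pixel position (the shapes and names are illustrative, not the paper's implementation):

```python
import numpy as np

def interleave_subimages(sub_images):
    """Generate an EIA by interleaving orthographic sub-images.

    sub_images : array of shape (P, P, U, V, 3) -- one U x V sub-image per
                 pixel position (s, t) inside an elemental image, i.e. per
                 projecting angle theta. Any pixel flip required by a
                 particular display geometry is omitted in this sketch.
    Returns an EIA of shape (U*P, V*P, 3).
    """
    P, _, U, V, C = sub_images.shape
    eia = np.zeros((U * P, V * P, C), dtype=sub_images.dtype)
    for s in range(P):
        for t in range(P):
            # pixel (s, t) of every elemental image is taken from the
            # sub-image associated with that viewing direction
            eia[s::P, t::P, :] = sub_images[s, t]
    return eia
```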
3 Experiments and Results
In our experiments, we use two projectors (CB-X24) as DLP1 and DLP2 to project the N grating patterns, with N = 4, so the phase-shifting step is 2π/4 = π/2. The CCD (GM501-H) captures the deformed patterns at 640 × 480 pixels. The generated EIA is displayed on the II pad [15]. The experimental setup is shown in Fig. 4.
The proposed II pickup system is configured with the specifications in Table 1. The distance between the CCD and the DLPs is 0.331 m, and the central depth planes are located at the dc = 0 pixel plane and the dc = 130 pixel plane.
In our experiments, a “man head” is used as the real scene. We reconstruct the 3D shapes of the head with the deformed patterns (Fig. 5(a) and (c)) from each of the DLPs, respectively. As shown in Fig. 5(b) and (d), the 3D shapes reconstructed by a single DLP are not complete. Figure 5(e) shows the fused 3D shape in our proposed system. From the profile shown in Fig. 5(f), we can see that the fused 3D shape is complete, with no error accumulation.
Fig. 5. Captured deformed patterns and reconstructed 3D shapes in the experiments: (a) and (c) the deformed patterns by DLP1 and DLP2, (b) and (d) the 3D shapes reconstructed from (a) and (c), (e) the fused 3D shape in the proposed system, and (f) the profile of the fused 3D shape (y = 230).
We generate the sub-images by the proposed method from the depth data and color texture of the head, as shown in Fig. 6(a)–(c). The projecting angle θ is continuous and tunable. The EIAs generated with different central depth planes are shown in Fig. 6(d) and (e).
Fig. 6. Sub-images and generated EIAs by the proposed method: (a), (b), and (c) the sub-images; (d) and (e) the EIAs with different central depth planes.
When the viewer moves in front of the II display, the reconstructed images from different positions are captured, as shown in Fig. 7.
Fig. 7. Different views of the reconstructed 3D images: (a) top view, (b) left view, (c) front view, (d) right view, and (e) bottom view.
4 Conclusion
A dual-projection based high-precision II pickup system for the real scene is proposed in this paper. The proposed system takes advantage of the high accuracy of structured light illumination and simplifies the optical and mechanical structure for capturing the real scene. With the high-precision depth data and color texture, continuous and tunable parallaxes are generated in the experiments. With the proposed II pickup system, the high-quality EIA can be generated efficiently.
References
Lippmann, G.: La photographie integrale. C. R. Acad. Sci. 146, 446–451 (1908)
Hong, J., Kim, Y., Choi, H.J., Hahn, J., Park, J.H., Kim, H., Min, S.W., Chen, N., Lee, B.: Three-dimensional display technologies of recent interest: principles, status, and issues. Appl. Opt. 50(34), H87–H115 (2011)
Xiao, X., Javidi, B., Martinez-Corral, M., Stern, A.: Advances in three-dimensional integral imaging: sensing, display, and applications [Invited]. Appl. Opt. 52(4), 546–560 (2013)
Okano, F., Hoshino, H., Arai, J., Yuyama, I.: Real-time pickup method for a three-dimensional image based on Integral Photography. Appl. Opt. 36(7), 1598–1603 (1997)
Jang, J.S., Javidi, B.: Improved viewing resolution of three-dimensional integral imaging by use of nonstationary micro-optics. Opt. Lett. 27(5), 324–326 (2002)
Yoo, H.: Axially moving a lenslet array for high-resolution 3D images in computational integral imaging. Opt. Express 21(7), 8873–8878 (2013)
Arai, J., Okui, M., Yamashita, T., Okano, F.: Integral three-dimensional television using a 2000-scanning-line video system. Appl. Opt. 45(8), 1704–1712 (2006)
Xu, Y., Wang, X.R., Sun, Y., Zhang, J.Q.: Homogeneous light field model for interactive control of viewing parameters of integral imaging displays. Opt. Express 20(13), 14137–14151 (2012)
Sang, X.Z., Fan, F.C., Jiang, C.C., Choi, S., Dou, W.H., Yu, C., Xu, D.: Demonstration of a large-size realtime full-color three-dimensional display. Opt. Lett. 34(24), 3803–3805 (2009)
Moon, I., Javidi, B.: Three-dimensional recognition of photon-starved events using computational integral imaging and statistical sampling. Opt. Lett. 34(6), 731–733 (2009)
Navarro, H., Dorado, A., Saavedra, G., Llavador, A., Martínez-Corral, M., Javidi, B.: Is it worth using an array of cameras to capture the spatio-angular information of a 3D scene or is it enough with just two? In: Proceedings of SPIE, vol. 8384, pp. 838406–838406-7 (2012)
Liao, H., Iwahara, M., Hata, N., Dohi, T.: High-quality integral videography using a multiprojector. Opt. Express 12(6), 1067–1076 (2004)
Igarashi, Y., Murata, H., Ueda, M.: 3D display system using a computer generated integral photography. Jpn. J. Appl. Phys. 17(9), 1683–1684 (1978)
Jang, Y.H., Park, C., Jung, J.S., Park, J.H., Kim, N., Ha, J.S., Yoo, K.H.: Integral imaging pickup method of bio-medical data using GPU and Octree. J. Korea Contents Assoc. 10(6), 1–9 (2010)
Ji, C.C., Luo, C.G., Deng, H., Li, D.H., Wang, Q.H.: Tilted elemental image array generation method for moiré-reduced computer generated integral imaging display. Opt. Express 21(17), 19816–19824 (2013)
Kwon, K.C., Park, C., Erdenebat, M.U., Jeong, J.S., Choi, J.H., Kim, N., Park, J.H., Lim, Y.T., Yoo, K.H.: High speed image space parallel processing for computer-generated integral imaging system. Opt. Express 20(2), 732–740 (2012)
Li, G., Kwon, K.C., Shin, G.H., Jeong, J.S., Yoo, K.H., Kim, N.: Simplified integral imaging pickup method for real objects using a depth camera. J. Opt. Soc. Korea 16(4), 381–385 (2012)
Jiao, X.X., Zhao, X., Yang, Y., Fang, Z.L., Yuan, X.C.: Dual-camera enabled real-time three-dimensional integral imaging pick-up and display. Opt. Express 20(25), 27304–27311 (2012)
Cooperation Stereo Vision. http://www.cs.cmu.edu/clz/stereo.html
Kavehvash, Z., Mehrany, K., Bagheri, S.: Optimization of the lens-array structure for performance improvement of integral imaging. Opt. Lett. 36(20), 3993–3995 (2011)
Srinivasan, V., Liu, H.C., Halioua, M.: Automated phase-measuring profilometry of 3-D diffuse objects. Appl. Opt. 23(18), 3105–3108 (1984)
Kim, E.H., Hahn, J., Kim, H., Lee, B.: Profilometry without phase unwrapping using multi-frequency and four-step phase-shift sinusoidal fringe projection. Opt. Express 17(10), 7818–7830 (2009)
Kim, J., Jung, J.H., Jang, C., Lee, B.: Real-time capturing and 3D visualization method based on integral imaging. Opt. Express 21(16), 18742–18753 (2013)
Schaffer, M., Grosse, M., Kowarschik, R.: High-speed pattern projection for three-dimensional shape measurement using laser speckles. Appl. Opt. 49(18), 3622–3629 (2010)
Su, L., Su, X., Li, W., Xiang, L.: Application of modulation measurement profilometry to objects with surface holes. Appl. Opt. 38(7), 1153–1158 (1999)
Ou, P., Li, B., Wang, Y., Zhang, S.: Flexible real-time natural 2D color and 3D shape measurement. Opt. Express 21(14), 16736–16741 (2013)
Zhang, S., Van Der Weide, D., Oliver, J.: Superfast phase-shifting method for 3-D shape measurement. Opt. Express 18(9), 9684–9689 (2010)
Liu, K., Wang, Y., Lau, D.L., Hao, Q., Hassebrook, L.G.: Dual-frequency pattern scheme for high-speed 3-D shape measurement. Opt. Express 18(5), 5229–5244 (2010)
Xu, Y., Jia, S., Bao, Q., Chen, H., Yang, J.: Recovery of absolute height from wrapped phase maps for fringe projection profilometry. Opt. Express 22(14), 16819–16828 (2014)
Park, K.S., Min, S.W., Cho, Y.: Viewpoint vector rendering for efficient elemental image generation. IEICE Trans. Inf. Syst. E90-D, 233–241 (2007)
Kang, H.H., Lee, J.H., Kim, E.S.: Enhanced compression rate of integral images by using motion-compensated residual images in three-dimensional integral-imaging. Opt. Express 20(5), 5440–5459 (2012)
Lee, J.J., Shin, D.H., Lee, B.G.: Simple correction method of distorted elemental images using surface markers on lenslet array for computational integral imaging reconstruction. Opt. Express 17(20), 18026–18037 (2009)
Acknowledgment
The work is supported by the NSFC under Grant Nos. 61225022 and 61320106015, the “973” Program under Grant No. 2013CB328802, and the “863” Program under Grant No. 2015AA015902.
Author information
Authors and Affiliations
School of Electronics and Information Engineering, Sichuan University, Chengdu, 610065, China
Zhao-Long Xiong, Qiong-Hua Wang, Huan Deng & Yan Xing
Corresponding author
Correspondence to Qiong-Hua Wang.
Editor information
Editors and Affiliations
Department of Electronic Engineering, Tsinghua University, Beijing, China
Yu-Jin Zhang
Copyright information
© 2015 Springer International Publishing Switzerland
About this paper
Cite this paper
Xiong, Z.L., Wang, Q.H., Deng, H., Xing, Y. (2015). Dual-Projection Based High-Precision Integral Imaging Pickup System. In: Zhang, Y.J. (ed.) Image and Graphics. ICIG 2015. Lecture Notes in Computer Science, vol 9218. Springer, Cham. https://doi.org/10.1007/978-3-319-21963-9_19
Publisher Name: Springer, Cham
Print ISBN: 978-3-319-21962-2
Online ISBN: 978-3-319-21963-9