3D Fluid Flow Reconstruction Using Compact Light Field PIV

  • Conference paper

Part of the book series: Lecture Notes in Computer Science (LNIP, volume 12361)


Abstract

Particle Imaging Velocimetry (PIV) estimates the fluid flow by analyzing the motion of injected particles. The problem is challenging as the particles lie at different depths but have similar appearances. Tracking a large number of moving particles is particularly difficult due to the heavy occlusion. In this paper, we present a PIV solution that uses a compact lenslet-based light field camera to track dense particles floating in the fluid and reconstruct the 3D fluid flow. We exploit the focal symmetry property in the light field focal stacks for recovering the depths of similar-looking particles. We further develop a motion-constrained optical flow estimation algorithm by enforcing the local motion rigidity and the Navier-Stokes fluid constraint. Finally, the estimated particle motion trajectory is used to visualize the 3D fluid flow. Comprehensive experiments on both synthetic and real data show that using a compact light field camera, our technique can recover dense and accurate 3D fluid flow.

This work was performed when Zhong Li was a visiting student at LSU.



1 Introduction

Recovering time-varying volumetric 3D fluid flow is a challenging problem. Successful solutions can benefit applications in many science and engineering fields, including oceanology, geophysics, biology, and mechanical and environmental engineering. In experimental fluid dynamics, a standard methodology for measuring fluid flow is Particle Imaging Velocimetry (PIV) [1]: the fluid is seeded with tracer particles, whose motions are assumed to follow the fluid dynamics faithfully; the particles are then tracked over time, and their 3D motion trajectories are used to represent the fluid flow.

Although highly accurate, existing PIV solutions usually require complex and expensive equipment, and the setups end up bulky. For example, standard laser-based PIV methods [6,16] use an ultra-high-speed laser beam to illuminate particles in order to track their motions. One limitation of these methods is that the measured motion field only contains the 2D in-plane movement restricted to the fluid slice being scanned, as the laser beam can only scan one depth layer at a time. To fully characterize the fluid, it is necessary to recover the 3D flow motion within the entire fluid volume. Three-dimensional PIV methods such as tomographic PIV (Tomo-PIV) [9] use multiple cameras to capture the particles and resolve their depths in 3D using multi-view stereo, but such multi-camera systems need to be well calibrated and fully synchronized. More recently, the Rainbow PIV solutions [46,47] use color to encode particles at different depths in order to recover the 3D fluid flow. However, this setup requires a specialized illumination source with diffractive optics for color encoding, and the optical system needs to be precisely aligned.

In this paper, we present a flexible and low-cost 3D PIV solution that uses only one compact lenslet-based light field camera as the acquisition device. A light field camera, in essence, is a single-shot, multi-view imaging device [33]. The captured light field records the 4D spatial and angular light rays scattered from the tracer particles. As commercial light field cameras (e.g., Lytro Illum and Raytrix R42) can capture high-resolution light fields, we are able to resolve dense particles in the 3D fluid volume. The small baseline of the lenslet array further helps recover subtle particle motions at the sub-pixel level. In particular, our method benefits from the post-capture refocusing capability of the light field. We use the focal stack to establish correspondences among particles at different depths. To resolve heavily occluded particles, we exploit the focal stack symmetry (i.e., intensities in the focal stack are symmetric around the ground truth disparity [25,41]) for accurate particle 3D reconstruction.

Given the 3D locations of particles at each time frame, we develop a physics-based optical flow estimation algorithm to recover the particles' 3D velocity field, which represents the 3D fluid flow. In particular, we introduce two new regularization terms to refine the classic variational optical flow [17]: 1) a one-to-one particle correspondence term to maintain smooth and consistent flow motions across different time frames; and 2) a divergence-free regularization term derived from the Navier-Stokes equations to enforce the physical properties of incompressible fluid. These terms help resolve ambiguities in particle matching caused by similar appearances while enforcing the reconstruction to obey physical laws. Through synthetic and real experiments, we show that using a simple single-camera setup, our approach outperforms state-of-the-art PIV solutions on recovering volumetric 3D fluid flows of various types.

2 Related Work

In computer vision and graphics, much effort has been made in modeling and recovering transparent objects or phenomena directly from images (e.g., fluid [32,49], gas flows [4,20,28,48], smoke [12,14], and flames [13,19], etc.). As these objects do not have their own appearances, a known pattern is often assumed, and the light paths traveled through the transparent medium are estimated for 3D reconstruction. A comprehensive survey can be found in [18]. However, many of these imaging techniques are designed to recover the 3D density field, which does not explicitly reveal the internal flow motion.

Our method, instead, aims at estimating the 3D flow motion in terms of a velocity field. The measurement procedure is similar to the Particle Imaging Velocimetry (PIV) method, which estimates flow motion from the movement of injected tracer particles. Traditional PIV [6,16] recovers 2D velocity fields on thin fluid slices using high-speed laser scanning. As 3D volumetric flow is critical to fully characterize the fluid behavior, recovering a 3D velocity field within the entire volume is of great interest.

To recover the 3D velocity field of a dense set of particles, stereoscopic cameras [3,35] are used to estimate the particle depth. Tomographic PIV (Tomo-PIV) [9,22,36] uses multiple (usually three to six) cameras to determine 3D particle locations by space carving. Aguirre-Pablo et al. [2] perform Tomo-PIV using mobile devices. However, the accuracy of reconstruction is compromised due to the low resolution of mobile cameras. Other notable 3D PIV approaches include defocusing PIV [21,45], holographic PIV [39,50], and synthetic aperture PIV [5,31]. All these systems use an array of cameras for acquisition, and each measurement requires elaborate calibration and synchronization. In contrast, our setup is more flexible by using a single compact light field camera. The recently proposed rainbow PIV [46,47] uses color-coded illumination to recover depth from a single camera. However, both the light source and camera are customized with special optical elements, and only a sparse set of particles can be resolved. Proof-of-concept simulations [27] and experiments [10] using compact light field or plenoptic cameras for PIV have been performed and have shown efficacy. However, the depth estimation and particle tracking algorithms used in these early works are rather primitive and are not optimized according to light field properties. As a result, the recovered particles are relatively sparse and the reconstruction accuracy is lower than traditional PIV. Shi et al. [37,38] use ray tracing to estimate particle velocity with a light field camera and conduct a comparison with Tomo-PIV. In our approach, we exploit the focal stack symmetry [25] of light fields for more accurate depth reconstruction in the presence of heavily occluded dense particles.

To recover the flow motion, standard PIV uses 3D cross-correlation to match local windows between neighboring time frames [9,44]. Although many improvements (for instance, matching with adaptive window sizes [22]) have been made, the window-based solutions suffer from problems in regions with few visible particles. Another class of methods directly tracks the path of individual particles over time [29,36]. However, with increased particle density, tracking is challenging under occlusions. Heitz et al. [15] propose the application of variational optical flow to fluid flow estimation. Vedula et al. [43] extend optical flow to dynamic environments and introduce the scene flow. Lv et al. [26] use a neural network to recover 3D scene flow. Unlike natural scenes that have diverse features, our PIV scenes only contain similar-looking particles. Therefore, existing optical flow or scene flow algorithms are not directly applicable to our problem. Some methods [23,47] incorporate physical constraints such as the Stokes equation into the optical flow framework to recover fluid flows that obey physical laws. However, these physics-based regularizations involve high-order terms and are difficult to solve. In our approach, we introduce two novel regularization terms: 1) a rigidity-enforced particle correspondence term and 2) a divergence-free term to refine the basic variational optical flow framework for estimating the motion of dense particles.

Fig. 1.

Overall pipeline of our light field PIV 3D fluid flow reconstruction algorithm.

3 Our Approach

Figure 1 shows the algorithmic pipeline of volumetric 3D fluid flow reconstruction using light field PIV. For each time frame, we first detect particles in the light field sub-aperture images using the IDL particle detector [7]. We then estimate particle depths through a joint optimization that exploits light field properties. After we obtain 3D particle locations, we compare two consecutive frames to establish one-to-one particle correspondences and finally solve the 3D velocity field using a constrained optical flow.

3.1 3D Particle Reconstruction

We first describe our 3D particle reconstruction algorithm, which exploits various properties of the light field.

Focal Stack Symmetry. A focal stack is a sequence of images focused at different depth layers. Due to the post-capture refocusing capability, a focal stack can be synthesized from a light field by integrating captured light rays. Lin et al. [25] conduct symmetry analysis on focal stacks and show that non-occluding pixels in a focal stack exhibit symmetry along the focal dimension centered at the in-focus slice. In contrast, occluding boundary pixels exhibit local asymmetry, as the outgoing rays do not originate from the same surface. This property is called focal stack symmetry. As shown in Fig. 2, in a focal stack, a particle exhibits a symmetric defocus effect centered at the in-focus slice. It is also worth noting that occluded particles can be seen in the focal stack as the occluder becomes extremely out-of-focus. Utilizing the focal stack symmetry helps resolve heavily occluded particles and hence enhances the accuracy and robustness of particle depth estimation.

Fig. 2.

Focal stack symmetry. We show zoom-in views of four focal slices on the right. A particle exhibits symmetric defocus effect (e.g., 31.5 mm and 36.5 mm slices) centered at the in-focus slice (34 mm). In the 39 mm slice, an occluded particle could be seen as the occluder becomes extremely out-of-focus.

Given a particle light field, we synthesize a focal stack from the sub-aperture images by integrating rays from the same focal slice. Each focal slice f has a corresponding disparity d that indicates the in-focus depth layer. Let I(p, f) be the intensity of a pixel p at focal slice f. For symmetry analysis, we define an in-focus score \(\kappa (p,f)\) of a pixel p at focal slice f as:

$$\begin{aligned} \begin{aligned} \kappa (p,f) = \int _0^{\delta _{max}} \rho (I(p,f+\delta )-I(p,f-\delta )) d\delta \end{aligned} \end{aligned}$$
(1)

where \(\delta \) represents a tiny disparity/focal shift and \(\delta _{max}\) is the maximum shift amount; \(\rho (\nu )=1-e^{-|\nu |_2/(2\sigma ^2)}\) is a robust distance function with \(\sigma \) controlling its sensitivity to noise. According to the focal stack symmetry, the intensity profile I(p, f) is locally symmetric around the true surface depth. Therefore, if the pixel p is in focus at its true disparity \(\hat{d}\), \(\kappa (p,\hat{d})\) should be 0. Hence, given an estimated disparity d at p, the closer d is to \(\hat{d}\), the smaller \(\kappa (p,d)\) becomes. We then formulate the focal stack symmetry term \(\beta _{fs}\) for particle depth estimation by summing up \(\kappa (p,d)\) over all pixels in a focal slice f with disparity d:

$$\begin{aligned} \begin{aligned} \beta _{fs}(d)=\sum _{p}\kappa (p,d) \end{aligned} \end{aligned}$$
(2)
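As a concrete illustration, Eqs. 1 and 2 can be discretized on a sampled focal stack. The following NumPy sketch (function name, array layout, and the discretization of the integral are our own assumptions, not the paper's implementation) evaluates the per-pixel in-focus score by comparing slices placed symmetrically around a candidate slice:

```python
import numpy as np

def in_focus_score(stack, f_idx, delta_max, sigma=0.05):
    """Discretized sketch of Eq. 1: accumulate the robust distance rho
    between focal slices placed symmetrically around slice f_idx.
    stack has shape (num_slices, H, W)."""
    score = np.zeros_like(stack[f_idx])
    for d in range(1, delta_max + 1):
        nu = stack[f_idx + d] - stack[f_idx - d]               # I(p, f+d) - I(p, f-d)
        score += 1.0 - np.exp(-np.abs(nu) / (2.0 * sigma**2))  # rho(nu)
    return score  # per-pixel kappa; summing over p gives beta_fs (Eq. 2)
```

A slice at the true in-focus depth yields a score near zero, while slices away from it accumulate a positive penalty, which is what the depth optimization exploits.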

Color and Gradient Consistency. Besides the focal stack symmetry, we also consider the color and gradient data consistency across sub-aperture images for depth estimation, using data terms similar to [25]. Specifically, by comparing each sub-aperture image with the center view, we define a cost metric C(i, p, d) as:

$$\begin{aligned} \begin{aligned} C(i,p,d)=|I_c(\omega (p)) - I_i(\omega (p+ d(p)\chi (i)))| \end{aligned} \end{aligned}$$
(3)

where i is the sub-aperture image index; \(I_c\) and \(I_i\) refer to the center view and the i-th sub-aperture image, respectively; \(\omega (p)\) refers to a small local window centered around pixel p; d(p) is the estimated disparity at pixel p; and \(\chi (i)\) is a scalar that scales the disparity d(p) according to the relative position between \(I_c\) and \(I_i\), as d(p) is the pixel shift between neighboring sub-aperture images.

The cost metric C measures the intensity similarity between shifted pixels in sub-aperture images given an estimated disparity. By summing up C over all pixels, we obtain the sum of absolute differences (SAD) term for the color consistency measurement:

$$\begin{aligned} \begin{aligned} \beta _{sad}(d)=\frac{1}{N} \sum _{i\in N}\sum _{p} C \end{aligned} \end{aligned}$$
(4)

where N is the total number of sub-aperture images (excluding the center view).
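To make the SAD term concrete, the sketch below (our own simplification: integer pixel shifts via `np.roll`, whole images standing in for the local windows \(\omega(p)\), and \(\chi(i)\) taken as the view's integer offset from the center) evaluates Eqs. 3-4 for one candidate disparity:

```python
import numpy as np

def sad_cost(center, views, offsets, d):
    """Sketch of Eqs. 3-4: shift each sub-aperture view by d * chi(i),
    where chi(i) is its (x, y) offset from the center view, and average
    the absolute intensity differences against the center view."""
    total = 0.0
    for view, (ox, oy) in zip(views, offsets):
        shifted = np.roll(view, shift=(int(round(d * oy)), int(round(d * ox))),
                          axis=(0, 1))
        total += np.abs(center - shifted).mean()  # |I_c(p) - I_i(p + d*chi(i))|
    return total / len(views)                     # beta_sad for this d
```

Sweeping `d` over candidate disparities and keeping the minimizer is the brute-force counterpart of the optimization in Sect. 3.1.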

Besides the color consistency, we also consider the consistency in the gradient domain. We first take partial derivatives of the cost metric C (Eq. 3) in both the x and y directions, \(\mathcal {D}_x= {\partial }C/{\partial x}\) and \(\mathcal {D}_y= {\partial }C/{\partial y}\), and then formulate the following weighted sum of gradient differences (GRAD) for the gradient consistency measurement:

$$\begin{aligned} \begin{aligned} \beta _{grad}(d)=\frac{1}{N} \sum _{i\in N}\sum _{p} \left[ \mathcal {W}(i)\mathcal {D}_x + (1-\mathcal {W}(i))\mathcal {D}_y \right] \end{aligned} \end{aligned}$$
(5)

In Eq. 5, \(\mathcal {W}(i)\) is a weighting factor that determines the contribution of the horizontal gradient cost (\(\mathcal {D}_x\)) according to the relative positions of the two sub-aperture images being compared. It is defined as \(\mathcal {W}(i) = \frac{\varDelta i_x}{\varDelta i_x+\varDelta i_y}\), where \(\varDelta i_x\) and \(\varDelta i_y\) are the position differences between sub-aperture images along the x and y directions. For example, \(\mathcal {W}(i) = 1\) if the target view is horizontally aligned with the reference view; in this case, only the gradient costs in the x direction are aggregated.
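The weighting rule is simple enough to state in one line; a minimal sketch (the function name is ours), operating on a view's offset from the reference:

```python
def grad_weight(dix, diy):
    """W(i) from Eq. 5: the fraction of the gradient cost assigned to the
    x direction, given the view's offset (delta_i_x, delta_i_y) from the
    reference view. Uses absolute offsets so signs do not matter."""
    return abs(dix) / (abs(dix) + abs(diy))
```

A purely horizontal neighbor gets weight 1 (only \(\mathcal{D}_x\) counts), a purely vertical one gets 0, and a diagonal neighbor splits the cost evenly.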

Particle Depth Estimation. Finally, combining Eqs. 2, 4, and 5, we form the following energy function for optimizing the particle disparity d:

$$\begin{aligned} \begin{aligned} \beta (d) = \beta _{fs}(d) + \lambda _{sad}\beta _{sad}(d) + \lambda _{grad}\beta _{grad}(d) \end{aligned} \end{aligned}$$
(6)

In our experiments, the two weighting factors are set as \(\lambda _{sad}=0.8\) and \(\lambda _{grad}=0.9\). We use Levenberg-Marquardt (LM) optimization to solve Eq. 6. Finally, using the calibrated light field camera intrinsic parameters, we convert the particle disparity map to 3D particle locations. The pipeline of our 3D particle reconstruction algorithm is shown in Fig. 3.

Fig. 3.

Our 3D particle reconstruction algorithm pipeline.

3.2 Fluid Flow Reconstruction

After we reconstruct 3D particles in each frame, we compare two consecutive frames to estimate the volumetric 3D fluid flow.

Given two sets of particle locations \(S_{1}\) and \(S_{2}\) recovered from consecutive frames, we first convert \(S_1\) and \(S_2\) into voxelized 3D volumes of occupancy probabilities \(\varTheta _1\) and \(\varTheta _2\) through linear interpolation. Our goal is to solve for a per-voxel 3D velocity vector \(\mathbf {u} = [u,v,w]\) over the whole volume.
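One plausible reading of this voxelization step is trilinear splatting: each particle distributes a unit of occupancy to its eight surrounding voxels. The sketch below is our own illustration of that idea (the paper does not specify the exact interpolation scheme), using an (x, y, z) voxel indexing convention:

```python
import numpy as np

def splat_particles(points, shape):
    """Convert particle locations (in voxel coordinates) into an occupancy
    volume via trilinear splatting; each particle contributes total mass 1."""
    vol = np.zeros(shape)
    for p in points:
        i0 = np.floor(p).astype(int)
        frac = p - i0
        for dx in (0, 1):
            for dy in (0, 1):
                for dz in (0, 1):
                    idx = i0 + (dx, dy, dz)
                    if np.all(idx >= 0) and np.all(idx < shape):
                        # weight = product of per-axis linear interpolation factors
                        w = ((1 - frac[0]) if dx == 0 else frac[0]) * \
                            ((1 - frac[1]) if dy == 0 else frac[1]) * \
                            ((1 - frac[2]) if dz == 0 else frac[2])
                        vol[tuple(idx)] += w
    return vol
```

The resulting volumes play the role of \(\varTheta_1\) and \(\varTheta_2\) in the data term of Eq. 8.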

In particular, we solve this problem under the variational optical flow framework [17] and propose two novel regularization terms, the correspondence term and the divergence-free term, for improved accuracy and efficiency. Our overall energy function \(E_{total}\) is a combination of these regularization terms and is written as:

$$\begin{aligned} \begin{aligned} E_{total} = E_{data}+\lambda _1E_{smooth} +\lambda _2E_{corres}+\lambda _3E_{div} \end{aligned} \end{aligned}$$
(7)

where\(\lambda _1\),\(\lambda _2\), and\(\lambda _3\) are term balancing factors. Please see our supplementary material for mathematical details of solving this energy function. In the following, we describe the algorithmic details of each regularization term.

Basic Optical Flow. The data term \(E_{data}\) and smoothness term \(E_{smooth}\) are adopted from basic optical flow and are derived from the brightness constancy assumption. \(E_{data}\) enforces consistency between the occupancy probabilities \(\varTheta _1\) and \(\varTheta _2\) at corresponding voxels, and \(E_{smooth}\) constrains the fluid motion to be piecewise smooth. In our case, \(E_{data}\) and \(E_{smooth}\) can be written as:

$$\begin{aligned} \begin{aligned} E_{data}(\mathbf {u}) = \int ||\varTheta _2(\mathbf {p} + \mathbf {u} ) - \varTheta _1(\mathbf {p})||^2_2 d\mathbf {p} \end{aligned} \end{aligned}$$
(8)
$$\begin{aligned} \begin{aligned} E_{smooth}(\mathbf {u}) = ||\nabla \mathbf {u}||^2_2 \end{aligned} \end{aligned}$$
(9)

where \(\mathbf {p}\) refers to a voxel in the fluid volume and \(\nabla \) is the gradient operator.

Correspondence Term. We propose a novel correspondence term for more accurate flow estimation. Notice that\(E_{data}\) in the basic optical flow only enforces voxel-level consistency while particle-to-particle correspondences are not guaranteed. We therefore develop a correspondence term\(E_{corres}\) to enforce one-to-one particle matching.\(E_{corres}\) helps improve matching accuracy especially in regions with high particle density.

Let’s consider two sets of particles:\(S_1=\{s_1|s_1\in \mathbb {R}^3\}\) as reference and\(S_2=\{s_2|s_2\in \mathbb {R}^3\}\) as target.\(E_{corres}\) enforces the one-to-one particle matching between the target and reference sets. To formulate\(E_{corres}\), we first estimate correspondences between particles in\(S_1\) and\(S_2\). We solve this problem by estimating transformations that map particles in\(S_1\) to\(S_2\).

In particular, we employ a deformable graph similar to [42] that considers local geometric similarity and rigidity. To build the graph, we uniformly sample a set of particles in \(S_1\) and use them as graph nodes \(\mathbf {G}=\{g_1,g_2,g_3,...,g_m\}\). We then aim to estimate a set of affine transformations \(\mathbf {A}=\{A_i\}^m_{i=1}\) and translations \(\mathbf {b}=\{b_i\}_{i=1}^m\), one pair for each graph node. We use these graph nodes as control points to deform particles in \(S_1\), instead of computing transformations for individual particles. Given the graph node transformations \(\mathbf {A}\) and \(\mathbf {b}\), we can transform every particle \(s_1\in S_1\) to its new location \(s_1'\) using a weighted linear combination of the graph node transformations:

$$\begin{aligned} \begin{aligned} s_1^{'} = f(s_1,\mathbf {A},\mathbf {b})=\sum _{i=1}^m \varpi _{i}(s_1)(A_i(s_1-g_i)+g_i+b_i) \end{aligned} \end{aligned}$$
(10)

where the weight \(\varpi _{i}(s_1) = \max (0,(1-||s_1 - g_i||^2/R^2)^3)\) models graph node \(g_i\)'s influence on a particle \(s_1 \in S_1\) according to their Euclidean distance. This restricts a particle's transformation to be affected only by nearby graph nodes. In our experiments, we consider the nearest four graph nodes, and R is the particle's distance to its nearest graph node.
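Eq. 10 can be sketched directly; the snippet below is a simplified illustration (our own function name and a brute-force blend over all nodes, without the nearest-four truncation or weight normalization details the full method may use):

```python
import numpy as np

def deform(s, nodes, A, b, R):
    """Eq. 10 sketch: move particle s by a distance-weighted blend of the
    per-node affine transforms (A[i], b[i]) around graph nodes g_i."""
    d2 = np.sum((nodes - s) ** 2, axis=1)          # squared distance to each node
    w = np.maximum(0.0, (1.0 - d2 / R**2) ** 3)    # compact-support weight varpi_i
    out = np.zeros(3)
    for i in range(len(nodes)):
        out += w[i] * (A[i] @ (s - nodes[i]) + nodes[i] + b[i])
    return out
```

For a particle sitting exactly on a node with an identity rotation, the deformation reduces to the node's translation, which matches the intent of the control-point formulation.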

To obtain the graph node transformations\(\mathbf {A}\) and\(\mathbf {b}\), we solve an optimization problem with energy function:

$$\begin{aligned} \begin{aligned} \varPsi _{total} =\varPsi _{data}+\alpha _1\varPsi _{rigid}+\alpha _2\varPsi _{smooth} \end{aligned} \end{aligned}$$
(11)

\(\varPsi _{data}\) is a data term that minimizes particle-to-particle distances after the transformation and is formulated as:

$$\begin{aligned} \begin{aligned} \varPsi _{data} = \sum _{s_1\in S_1}||s_1' - c_i||^2 \end{aligned} \end{aligned}$$
(12)

where\(c_i\) is the closest point to\(s_1^{'}\) in\(S_2\).

\(\varPsi _{rigid}\) is a rigidity regularization term that enforces the local rigidity of affine transformation.\(\varPsi _{rigid}\) can be written as:

$$\begin{aligned} \begin{aligned} \varPsi _{rigid} = \sum _{\mathbf {G}}||A_i^TA_i - \mathbb {I}||_F^2 + (det(A_i)-1)^2 \end{aligned} \end{aligned}$$
(13)

where\(\mathbb {I}\) is an identity matrix.

The last term\(\varPsi _{smooth}\) enforces the spatial smoothness of nearby nodes and is written as:

$$\begin{aligned} \begin{aligned} \varPsi _{smooth} = \sum _{\mathbf {G}}\sum _{k\in \varOmega (i)} ||A_i(g_k-g_i)+g_i+b_i-(g_k+b_k)||^2 \end{aligned} \end{aligned}$$
(14)

where\(\varOmega (i)\) refers to the set of nearest four neighbors of\(g_i\).

The overall energy function \(\varPsi _{total} \) is optimized with an iterative Gauss-Newton algorithm, which solves for the affine transformations \(\mathbf {A}\) and \(\mathbf {b}\). In our experiments, we use \(\alpha _1 = 50\) and \(\alpha _2 = 10\) in Eq. 11.

By applying Eq. 11, we can transform every particle \(s_1\in S_1\) to its new location \(s_1'\) using the graph nodes' transformations. We then find \(S_1\)'s corresponding set \(S_2^c\) in the target \(S_2\) using a nearest neighbor search (i.e., \(s_2^c \,=\,\)nnsearch\((s_1',s_2)\)). After we establish the one-to-one correspondences between \(S_1\) and \(S_2\), our correspondence term can be formulated as follows:

$$\begin{aligned} \begin{aligned} E_{corres}(\mathbf {u},S_1,S_2^c) = \sum _{s_1\in S_1,s_2^c\in S_2^c}||s_2^c - (s_1 + \mathbf {u}(s_1))||_2^2 \end{aligned} \end{aligned}$$
(15)
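The nearest-neighbor step that produces \(S_2^c\) can be illustrated with a brute-force distance matrix (our own simplification; at realistic particle counts a k-d tree would replace the \(O(n^2)\) search, and strict one-to-one assignment would need an extra mutual-consistency or assignment step):

```python
import numpy as np

def match_particles(deformed_s1, s2):
    """nnsearch sketch: for each deformed reference particle s1', return
    the closest target particle in S2 as its correspondence proposal."""
    d = np.linalg.norm(deformed_s1[:, None, :] - s2[None, :, :], axis=2)
    idx = np.argmin(d, axis=1)   # index of nearest target per reference
    return s2[idx]               # S2^c, which feeds E_corres (Eq. 15)
```

The returned set supplies the targets \(s_2^c\) that the velocity field is pulled toward in Eq. 15.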

We show the effectiveness of the correspondence term by comparing the velocity field obtained with vs. without\(E_{corres}\). The results are shown in Fig. 4. This comparison demonstrates that our correspondence term greatly improves matching accuracy and hence benefits flow reconstruction.

Fig. 4.

Particle matching between source and target volumes with vs. without using the correspondence term\(E_{corres}\). In our plots, green lines indicate correct correspondences and red lines indicate incorrect ones. (Color figure online)

Divergence-Free Term. To enforce the physical properties of incompressible fluid, we add a divergence-free regularization term \(E_{div}\) to the optical flow framework. Based on the Navier-Stokes equations, the fluid velocity \(\mathbf {u}\) can be split into two distinct components via the Helmholtz decomposition: an irrotational component \(\nabla P\) and a solenoidal component \(\mathbf {u}_{sol}=[u_{sol},v_{sol},w_{sol}]\). The irrotational component \(\nabla P\) is curl-free and is determined by the gradient of a scalar function P (e.g., pressure). The solenoidal component \(\mathbf {u}_{sol}\) is divergence-free and models an incompressible flow. From the divergence-free property, we have:

$$\begin{aligned} \begin{aligned} \nabla \cdot \mathbf {u}_{sol} = 0 \end{aligned} \end{aligned}$$
(16)

where \(\nabla = [\frac{\partial }{\partial x},\frac{\partial }{\partial y},\frac{\partial }{\partial z}]^T\), so that \(\nabla \cdot \) denotes the divergence operator. Since \(\mathbf {u}=\mathbf {u}_{sol}+\nabla P\), taking the divergence on both sides, we have:

$$\begin{aligned} \begin{aligned} \nabla \cdot \mathbf {u} = \nabla ^2 P \end{aligned} \end{aligned}$$
(17)

We solve Eq. 17 by Poisson integration and compute the scalar field as \( P = (\nabla ^2)^{-1} (\nabla \cdot \mathbf {u})\). We then project \(\mathbf {u}\) onto the divergence-free vector field: \(\mathbf {u}_{sol} = \mathbf {u} - \nabla P\). Similar to [11], we formulate a divergence-free term \(E_{div}\) that keeps the flow velocity field \(\mathbf {u}\) close to its divergence-free component \(\mathbf {u}_{sol}\):

$$\begin{aligned} \begin{aligned} E_{div}(\mathbf {u}) = ||\mathbf {u} - \mathbf {u}_{sol}||_2^2 \end{aligned} \end{aligned}$$
(18)
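The Helmholtz projection in Eqs. 16-18 can be sketched spectrally: on a periodic grid, the Poisson solve \(\nabla^2 P = \nabla\cdot\mathbf{u}\) becomes a per-mode division, after which \(\nabla P\) is subtracted. This FFT version assumes periodic boundaries and unit voxel spacing, which the paper's actual solver need not share:

```python
import numpy as np

def project_divergence_free(u):
    """Sketch of the Helmholtz projection: solve lap(P) = div(u) in Fourier
    space, then return u_sol = u - grad(P). u has shape (3, N, N, N)."""
    N = u.shape[1]
    k = 2.0 * np.pi * np.fft.fftfreq(N)            # wavenumbers, unit spacing
    kx, ky, kz = np.meshgrid(k, k, k, indexing="ij")
    U = np.fft.fftn(u, axes=(1, 2, 3))
    k2 = kx**2 + ky**2 + kz**2
    k2[0, 0, 0] = 1.0                              # avoid 0/0 at the mean mode
    div = 1j * (kx * U[0] + ky * U[1] + kz * U[2])  # FT of div(u)
    P = div / (-k2)                                 # lap(P)=div(u) -> -k^2 P = div
    U_sol = U - np.stack([1j * kx * P, 1j * ky * P, 1j * kz * P])
    return np.real(np.fft.ifftn(U_sol, axes=(1, 2, 3)))
```

A divergence-free input passes through unchanged, while a pure gradient field is removed entirely, which is exactly the behavior \(E_{div}\) rewards.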

4 Experimental Results

To evaluate our fluid flow reconstruction algorithm, we perform experiments on both synthetic and real data under the light field PIV setting. We also evaluate our method on the Johns Hopkins Turbulence Database (JHUTDB) [24,34], which provides ground truth fluid flow. All experiments are performed on a PC with an Intel i7-4700K CPU and 16 GB of memory. In terms of computation time, the entire process takes about 2 min: 30 s for particle location estimation, 40 s for correspondence matching, and 50 s for velocity field reconstruction.

4.1 Synthetic Data

We first evaluate our proposed approach on two simulated flows: a vortex flow and a drop flow. The flows are simulated within a volume of \(100 \times 100 \times 20 \) voxels. We randomly sample tracer particles within the fluid volume with a density of 0.02 particles per voxel. We render light field images with an angular resolution of \(7 \times 7\) and a spatial resolution of \(434 \times 625\). We simulate the advection of particles over time following the method in [40]. We apply our algorithm to the rendered light fields to recover the 3D fluid flows. In Fig. 5, we show our recovered velocity fields in comparison with the ground truth. Qualitatively, our reconstructed vector fields are highly consistent with the ground truth.

Fig. 5.

Synthetic results in comparison with the ground truth.

We perform quantitative evaluations using two error metrics: the average end-point error (AEE) and the average angular error (AAE). AEE is computed as the average Euclidean distance between the estimated particle positions and the ground truth ones. AAE is computed as the average angular difference between the estimated and ground truth velocity vectors. We compare our method with multi-scale Horn-Schunck (H & S) [30] and rainbow PIV [47]. Specifically, we apply H & S to our recovered 3D particles and use it as the baseline algorithm for flow estimation; with this comparison, we hope to demonstrate the effectiveness of our regularization terms in flow estimation. For rainbow PIV, we implemented a renderer to generate depth-dependent spectral images of virtual particles. To ensure fairness, the rendered images have the same spatial resolution as our input light field (i.e., \(434 \times 625\)).
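The two metrics are straightforward to compute over arrays of estimated and ground truth vectors; the sketch below is our own formulation of the standard definitions (AAE in radians, with a small epsilon guarding against zero-length vectors):

```python
import numpy as np

def aee(est, gt):
    """Average end-point error: mean Euclidean distance between the
    estimated and ground truth vectors (rows are 3D vectors)."""
    return np.mean(np.linalg.norm(est - gt, axis=-1))

def aae(est, gt, eps=1e-12):
    """Average angular error: mean angle (radians) between vector pairs."""
    dot = np.sum(est * gt, axis=-1)
    norms = np.linalg.norm(est, axis=-1) * np.linalg.norm(gt, axis=-1)
    return np.mean(np.arccos(np.clip(dot / (norms + eps), -1.0, 1.0)))
```

Both metrics are zero for a perfect reconstruction; AEE penalizes magnitude errors while AAE isolates directional errors.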

We also perform an ablation study by testing two variants of our method: “w/o \(E_{corres}\)”, which removes the correspondence term, and “w/o \(E_{div}\)”, which removes the divergence-free term. The experiments are performed on the vortex flow with particle density 0.02. Quantitative evaluations are shown in Fig. 6, and the error maps of the recovered velocity fields for our ablation study are shown in Fig. 7. We can see that our method achieves the best performance when both regularization terms are imposed. Our method outperforms both H & S and rainbow PIV at various particle density levels. Further, our accumulated error over time grows much more slowly than that of the other two state-of-the-art methods.

Fig. 6.

Quantitative evaluation. The left two plots show errors with respect to different particle densities. The right two plots show accumulated errors over time.

Fig. 7.

Ablation study. We show the error maps of estimated velocity field at three fluid volume slices.

4.2 Johns Hopkins Turbulence Database (JHUTDB)

Next, we conduct experiments on data generated from the Johns Hopkins Turbulence Database (JHUTDB) [24]. To reduce processing time, we crop out a volume of \(256 \times 128 \times 80\) voxels for each turbulence in the dataset. The norm of the velocity field at each location ranges from 0 to 2.7 voxels per time step. We generate random tracer particles with a density of 0.025 per voxel and advect the particles according to the turbulence velocity field. In our evaluation, we render two light field images at two consecutive frames to estimate the particle locations and reconstruct the velocity field. Our reconstruction results in comparison with the ground truth are shown in Fig. 8. We show our reconstructed velocity volume along the x, y, and z directions. We also show the error map of velocity magnitudes to illustrate that our method is highly accurate.

Fig. 8.

JHUTDB velocity field reconstruction results.

4.3 Real Data

Fig. 9.

Our real experiment setup. We use a compact light field camera in PIV setting.

We finally test our method on real captured flow data. Figure 9 shows our acquisition system for capturing real 3D flows. We use a Lytro Illum light field camera with a 30 mm focal length to capture the tracer particles in fluid. As the Illum does not have a video mode, we use an external control board to trigger the camera at high frequency to capture consecutive time frames. Due to the limited on-chip image buffer size, our acquisition cannot achieve a very high frame rate; in our experiments, we set the trigger frequency to 10 Hz. The captured light field has an angular resolution of 15\(\times \)15 and a spatial resolution of \(625 \times 434\). We use the light field calibration toolbox [8] to process and decode the raw light field data into sub-aperture images. We use the center view as the reference for depth estimation, and the effective depth volume that we are able to reconstruct is around \(600\times 500\times 200\) mm, slightly smaller than the captured field of view because we enforce rectangular volumes inside the perspective view frustum.

We use green polyethylene microspheres with a density of 1 g/cc and sizes of 1000–1180 \(\upmu \)m as tracer particles. Before dispersing the particles, we mix them with a surfactant to reduce the water's surface tension and thus minimize agglomeration between particles. We test on three types of flows: vortex, double vortex, and random complex flows.

Figure 10 shows our recovered fluid flow velocity fields and path line visualizations for the three flow types: vortex, double vortex, and random complex flow (please refer to the supplementary material for more reconstruction results). The left column shows the velocity field between the first and second frames. The right column shows the path line visualization through frames 1–4. We can see that our reconstructions well depict the intended fluid motions and are highly reliable.

Fig. 10.

Real experiment results. We show our recovered velocity fields (upper row) and path line visualizations on four consecutive frames (lower row) for three types of flows: vortex, double vortex and a random complex flow.

Fig. 11.

Comparison with scene flow (Lv et al. [26]) on real data. We compare the projected scene flow and the flow vector field on three types of flows.

We also compare our method with a recent state-of-the-art scene flow method [26] on the real data. The scene flow method takes two consecutive RGB-D images as input and uses a rigidity transform network and a flow network for motion estimation. Since the method needs a depth map as input, we first compute a depth map for the center view of the light field, combine it with the sub-aperture color image, and use them as input to [26]. The flow estimation results are shown in Fig. 11. We show the projected scene flows and the flow vector fields for three types of flows (single vortex, double vortex, and random flow). The scene flow method fails to recover the flow structures, especially for the vortex flows. This is because our particles are heavily occluded and have very similar appearances; further, the scene flow algorithm does not take the physical properties of fluid into consideration.

5 Conclusions

In this paper, we have presented a light field PIV solution that uses a commercial compact light field camera to recover volumetric 3D fluid motion from tracer particles. We have developed a 3D particle reconstruction algorithm that exploits light field focal stack symmetry in order to handle heavily occluded particles. To recover the fluid flow, we have refined the classical optical flow framework by introducing two novel regularization terms: 1) a correspondence term that enforces one-to-one particle matching; and 2) a divergence-free term that enforces the physical properties of incompressible fluid. Comprehensive synthetic and real experiments, as well as comparisons with the state of the art, have demonstrated the effectiveness of our method.
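The divergence-free term penalizes deviation from the incompressibility condition \(\nabla \cdot \mathbf{u} = 0\). A minimal NumPy sketch of such a penalty on a gridded velocity field (the discretization via central differences and the function names are our own illustration, not the paper's exact formulation):

```python
import numpy as np

def divergence(u, v, w, spacing=1.0):
    """Discrete divergence of a 3D velocity field sampled on a regular grid.

    u, v, w: (X, Y, Z) arrays holding the x-, y-, and z-velocity components.
    An incompressible flow should have divergence close to zero everywhere.
    """
    du = np.gradient(u, spacing, axis=0)  # du/dx
    dv = np.gradient(v, spacing, axis=1)  # dv/dy
    dw = np.gradient(w, spacing, axis=2)  # dw/dz
    return du + dv + dw

def divergence_penalty(u, v, w, spacing=1.0):
    # Quadratic soft penalty usable as an incompressibility regularizer.
    return float(np.sum(divergence(u, v, w, spacing) ** 2))
```

A uniform translation incurs zero penalty, while a uniformly expanding field is penalized, which is the behavior the regularizer needs.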

Although our method can faithfully recover fluid flows in small to medium volumes, it still has several limitations. First, due to the small baseline of the compact light field camera, the resolvable depth range is rather limited. As a result, the resolution of our volumetric velocity field along the z-axis is much lower than along the x- and y-axes. One way to enhance the z-resolution is to use a second light field camera that captures the fluid volume from an orthogonal angle. Second, our fluid flow reconstruction step considers only two consecutive frames, so motion continuity might not always be satisfied. Adding a temporal constraint to our optimization framework could address this.

References

1. Adrian, R.J., Westerweel, J.: Particle Image Velocimetry, vol. 30. Cambridge University Press, Cambridge (2011)

2. Aguirre-Pablo, A.A., Alarfaj, M.K., Li, E.Q., Hernández-Sánchez, J.F., Thoroddsen, S.T.: Tomographic particle image velocimetry using smartphones and colored shadows. Scientific Reports (2017)

3. Arroyo, M., Greated, C.: Stereoscopic particle image velocimetry. Meas. Sci. Technol. 2(12), 1181 (1991)

4. Atcheson, B., et al.: Time-resolved 3D capture of non-stationary gas flows. ACM Trans. Graph. (TOG) 27, 132 (2008)

5. Belden, J., Truscott, T.T., Axiak, M.C., Techet, A.H.: Three-dimensional synthetic aperture particle image velocimetry. Meas. Sci. Technol. 21(12), 125403 (2010)

6. Brücker, C.: 3D scanning PIV applied to an air flow in a motored engine using digital high-speed video. Meas. Sci. Technol. 8(12), 1480 (1997)

7. Crocker, J.C., Grier, D.G.: Methods of digital video microscopy for colloidal studies. J. Colloid Interface Sci. 179(1), 298–310 (1996)

8. Dansereau, D.G., Pizarro, O., Williams, S.B.: Decoding, calibration and rectification for lenselet-based plenoptic cameras. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1027–1034 (2013)

9. Elsinga, G.E., Scarano, F., Wieneke, B., van Oudheusden, B.W.: Tomographic particle image velocimetry. Exp. Fluids 41(6), 933–947 (2006)

10. Fahringer, T., Thurow, B.: Tomographic reconstruction of a 3-D flow field using a plenoptic camera. In: 42nd AIAA Fluid Dynamics Conference and Exhibit, p. 2826 (2012)

11. Gregson, J., Ihrke, I., Thuerey, N., Heidrich, W.: From capture to simulation: connecting forward and inverse problems in fluids. ACM Trans. Graph. (TOG) 33(4), 139 (2014)

12. Gu, J., Nayar, S.K., Grinspun, E., Belhumeur, P.N., Ramamoorthi, R.: Compressive structured light for recovering inhomogeneous participating media. IEEE Trans. Pattern Anal. Mach. Intell. 35, 1 (2013)

13. Hasinoff, S.W., Kutulakos, K.N.: Photo-consistent reconstruction of semitransparent scenes by density-sheet decomposition. IEEE Trans. Pattern Anal. Mach. Intell. 29, 870–885 (2007)

14. Hawkins, T., Einarsson, P., Debevec, P.: Acquisition of time-varying participating media. ACM Trans. Graph. (TOG) 24, 812–815 (2005)

15. Heitz, D., Mémin, E., Schnörr, C.: Variational fluid flow measurements from image sequences: synopsis and perspectives. Exp. Fluids 48(3), 369–393 (2010)

16. Hori, T., Sakakibara, J.: High-speed scanning stereoscopic PIV for 3D vorticity measurement in liquids. Meas. Sci. Technol. 15(6), 1067 (2004)

17. Horn, B.K., Schunck, B.G.: Determining optical flow. Artif. Intell. 17(1–3), 185–203 (1981)

18. Ihrke, I., Kutulakos, K.N., Lensch, H.P., Magnor, M., Heidrich, W.: Transparent and specular object reconstruction. In: Computer Graphics Forum, vol. 29, pp. 2400–2426. Wiley Online Library (2010)

19. Ihrke, I., Magnor, M.A.: Image-based tomographic reconstruction of flames. In: Symposium on Computer Animation (2004)

20. Ji, Y., Ye, J., Yu, J.: Reconstructing gas flows using light-path approximation. In: 2013 IEEE Conference on Computer Vision and Pattern Recognition, pp. 2507–2514 (2013)

21. Kajitani, L., Dabiri, D.: A full three-dimensional characterization of defocusing digital particle image velocimetry. Meas. Sci. Technol. 16(3), 790 (2005)

22. Lasinger, K., Vogel, C., Schindler, K.: Volumetric flow estimation for incompressible fluids using the stationary Stokes equations. In: 2017 IEEE International Conference on Computer Vision (ICCV), pp. 2584–2592. IEEE (2017)

23. Lasinger, K., Vogel, C., Schindler, K.: Volumetric flow estimation for incompressible fluids using the stationary Stokes equations. In: 2017 IEEE International Conference on Computer Vision (ICCV), pp. 2584–2592 (2017)

24. Li, Y., et al.: A public turbulence database cluster and applications to study Lagrangian evolution of velocity increments in turbulence. J. Turbul. (9), N31 (2008)

25. Lin, H., Chen, C., Bing Kang, S., Yu, J.: Depth recovery from light field using focal stack symmetry. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 3451–3459 (2015)

26. Lv, Z., Kim, K., Troccoli, A., Sun, D., Rehg, J.M., Kautz, J.: Learning rigidity in dynamic scenes with a moving camera for 3D motion field estimation. In: Proceedings of the European Conference on Computer Vision (ECCV), pp. 468–484 (2018)

27. Lynch, K., Fahringer, T., Thurow, B.: Three-dimensional particle image velocimetry using a plenoptic camera. In: 50th AIAA Aerospace Sciences Meeting including the New Horizons Forum and Aerospace Exposition, p. 1056 (2012)

28. Ma, C., Lin, X., Suo, J., Dai, Q., Wetzstein, G.: Transparent object reconstruction via coded transport of intensity. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 3238–3245 (2014)

29. Maas, H., Gruen, A., Papantoniou, D.: Particle tracking velocimetry in three-dimensional flows. Exp. Fluids 15(2), 133–146 (1993)

30. Meinhardt, E., Pérez, J.S., Kondermann, D.: Horn-Schunck optical flow with a multi-scale strategy. IPOL J. 3, 151–172 (2013)

31. Mendelson, L., Techet, A.H.: Quantitative wake analysis of a freely swimming fish using 3D synthetic aperture PIV. Exp. Fluids 56(7), 135 (2015)

32. Morris, N.J., Kutulakos, K.N.: Dynamic refraction stereo. IEEE Trans. Pattern Anal. Mach. Intell. 33(8), 1518–1531 (2011)

33. Ng, R., Levoy, M., Brédif, M., Duval, G., Horowitz, M., Hanrahan, P., et al.: Light field photography with a hand-held plenoptic camera. Comput. Sci. Tech. Rep. CSTR 2(11), 1–11 (2005)

34. Perlman, E., Burns, R., Li, Y., Meneveau, C.: Data exploration of turbulence simulations using a database cluster. In: Proceedings of the 2007 ACM/IEEE Conference on Supercomputing, p. 23. ACM (2007)

35. Pick, S., Lehmann, F.O.: Stereoscopic PIV on multiple color-coded light sheets and its application to axial flow in flapping robotic insect wings. Exp. Fluids 47(6), 1009 (2009)

36. Schanz, D., Gesemann, S., Schröder, A.: Shake-The-Box: Lagrangian particle tracking at high particle image densities. Exp. Fluids 57(5), 70 (2016)

37. Shi, S., Ding, J., Atkinson, C., Soria, J., New, T.H.: A detailed comparison of single-camera light-field PIV and tomographic PIV. Exp. Fluids 59, 1–13 (2018)

38. Shi, S., Ding, J., New, T.H., Soria, J.: Light-field camera-based 3D volumetric particle image velocimetry with dense ray tracing reconstruction technique. Exp. Fluids 58, 1–16 (2017)

39. Soria, J., Atkinson, C.: Towards 3C–3D digital holographic fluid velocity vector field measurement? Tomographic digital holographic PIV (tomo-HPIV). Meas. Sci. Technol. 19(7), 074002 (2008)

40. Stam, J.: Stable fluids. In: Proceedings of the 26th Annual Conference on Computer Graphics and Interactive Techniques, pp. 121–128. ACM Press/Addison-Wesley Publishing Co. (1999)

41. Strecke, M., Alperovich, A., Goldluecke, B.: Accurate depth and normal maps from occlusion-aware focal stack symmetry. In: 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 2529–2537. IEEE (2017)

42. Sumner, R.W., Schmid, J., Pauly, M.: Embedded deformation for shape manipulation. ACM Trans. Graph. (TOG) 26, 80 (2007)

43. Vedula, S., Baker, S., Rander, P., Collins, R.T., Kanade, T.: Three-dimensional scene flow. In: Proceedings of the Seventh IEEE International Conference on Computer Vision, vol. 2, pp. 722–729 (1999)

44. Wieneke, B.: Volume self-calibration for 3D particle image velocimetry. Exp. Fluids 45(4), 549–556 (2008)

45. Willert, C., Gharib, M.: Three-dimensional particle imaging with a single camera. Exp. Fluids 12(6), 353–358 (1992)

46. Xiong, J., Fu, Q., Idoughi, R., Heidrich, W.: Reconfigurable rainbow PIV for 3D flow measurement. In: 2018 IEEE International Conference on Computational Photography (ICCP), pp. 1–9. IEEE (2018)

47. Xiong, J., et al.: Rainbow particle imaging velocimetry for dense 3D fluid velocity imaging. ACM Trans. Graph. (TOG) 36(4), 36 (2017)

48. Xue, T., Rubinstein, M., Wadhwa, N., Levin, A., Durand, F., Freeman, W.T.: Refraction wiggles for measuring fluid depth and velocity from video. In: Fleet, D., Pajdla, T., Schiele, B., Tuytelaars, T. (eds.) ECCV 2014. LNCS, vol. 8691, pp. 767–782. Springer, Cham (2014). https://doi.org/10.1007/978-3-319-10578-9_50

49. Ye, J., Ji, Y., Li, F., Yu, J.: Angular domain reconstruction of dynamic 3D fluid surfaces. In: 2012 IEEE Conference on Computer Vision and Pattern Recognition, pp. 310–317. IEEE (2012)

50. Zhang, J., Tao, B., Katz, J.: Turbulent flow measurement in a square duct with hybrid holographic PIV. Exp. Fluids 23(5), 373–381 (1997)


Acknowledgements

This work is partially supported by the National Science Foundation (NSF) under Grants CBET-1706130 and CRII-1948524, and the Louisiana Board of Regents under Grant LEQSF (2018-21)-RD-A-10.

Author information

Authors and Affiliations

  1. University of Delaware, Newark, DE, USA

    Zhong Li

  2. DGene, Baton Rouge, LA, USA

    Yu Ji & Jingyi Yu

  3. ShanghaiTech University, Shanghai, China

    Jingyi Yu

  4. Louisiana State University, Baton Rouge, LA, USA

    Jinwei Ye


Corresponding author

Correspondence to Jinwei Ye.

Editor information

Editors and Affiliations

  1. University of Oxford, Oxford, UK

    Andrea Vedaldi

  2. Graz University of Technology, Graz, Austria

    Horst Bischof

  3. University of Freiburg, Freiburg im Breisgau, Germany

    Thomas Brox

  4. University of North Carolina at Chapel Hill, Chapel Hill, NC, USA

    Jan-Michael Frahm

Electronic supplementary material

Below is the link to the electronic supplementary material.

Supplementary material 2 (avi 56452 KB)

Rights and permissions

Copyright information

© 2020 Springer Nature Switzerland AG

About this paper


Cite this paper

Li, Z., Ji, Y., Yu, J., Ye, J. (2020). 3D Fluid Flow Reconstruction Using Compact Light Field PIV. In: Vedaldi, A., Bischof, H., Brox, T., Frahm, JM. (eds) Computer Vision – ECCV 2020. ECCV 2020. Lecture Notes in Computer Science(), vol 12361. Springer, Cham. https://doi.org/10.1007/978-3-030-58517-4_8
