Shenzhen Key Laboratory of Virtual Reality and Human Interaction Technology, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, China
Point clouds are a simple and lightweight 3D representation. However, point clouds obtained from 3D scans are typically sparse, irregular, and noisy, and require consolidation. In this paper, we present an edge-aware technique to facilitate the consolidation of point clouds. We design our network to process points grouped in local patches, and train it to learn and help consolidate points, particularly around edges. To achieve this, we formulate a regression component that simultaneously recovers 3D point coordinates and point-to-edge distances from upsampled features, and an edge-aware joint loss function that directly minimizes the distances from output points to the surface and to the edges. Compared with previous neural-network-based works, our consolidation is edge-aware. During the synthesis, our network can attend to the detected sharp edges and enable more accurate 3D reconstructions. We trained our network on virtually scanned point clouds, demonstrated the performance of our method on both synthetic and real point clouds, presented various surface reconstruction results, and showed how our method outperforms the state-of-the-art.
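To illustrate the idea behind the edge-aware joint loss, the following is a minimal sketch (not the paper's actual implementation) of how one might penalize output points by their distance to annotated edge segments, in addition to a surface term. The function names, the threshold `tau`, and the weight `alpha` are illustrative assumptions.

```python
import numpy as np

def point_to_segment_dist(p, a, b):
    """Distance from 3D point p to the line segment from a to b."""
    ab = b - a
    # Project p onto the segment, clamping to the endpoints.
    t = np.clip(np.dot(p - a, ab) / np.dot(ab, ab), 0.0, 1.0)
    return np.linalg.norm(p - (a + t * ab))

def edge_distances(points, segments):
    """Minimum distance from each point to any edge segment."""
    return np.array([
        min(point_to_segment_dist(p, a, b) for a, b in segments)
        for p in points
    ])

def edge_aware_loss(points, segments, surface_dist, tau=0.1, alpha=1.0):
    """Hypothetical joint loss: a surface term plus an edge term that
    only penalizes points already close to an edge (within tau)."""
    d_edge = edge_distances(points, segments)
    near = d_edge < tau
    edge_term = d_edge[near].mean() if near.any() else 0.0
    return surface_dist.mean() + alpha * edge_term
```

Restricting the edge term to points within a distance `tau` of an edge mirrors the intuition that only points the network intends to place on sharp features should be pulled toward them, while the remaining points are governed by the surface term.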
Lequan Yu, Xianzhi Li, Chi-Wing Fu, Daniel Cohen-Or, Pheng-Ann Heng. EC-Net: an Edge-aware Point set Consolidation Network. In ECCV, 2018. [Arxiv][Paper][supp]
Surface reconstruction results
We demonstrate the quality of our method by applying it to consolidate point sets and reconstruct surfaces. Our method produces consolidated point sets and improves the surface reconstruction quality, particularly on preserving the edges.
We compare our method with state-of-the-art methods on synthetic and real scanned datasets, including the edge-aware point set resampling (EAR) method, the PU-Net method, the continuous projection for fast L1 reconstruction (CLOP) method, and the GMM-inspired feature-preserving point set filtering (GPF) method.
Besides the above comparisons, we further evaluate our method on the reconstruction benchmark models (Berger et al.), and include a recent method for large-scale surface reconstruction (Ummenhofer et al.). The comparison clearly shows that reconstructions with our consolidation better preserve the sharp edges and have better visual quality, even on the noisy benchmark models with random and systematic errors.
Results on real scans
We also apply our method to point clouds produced from real scans, downloaded from Aim@Shape and obtained from the EAR project. Real scanned point clouds are often noisy and have inhomogeneous point distributions. Compared with the input point clouds, our method is still able to generate more points near the edges and on the surface, while better preserving the sharp features.
Acknowledgements
We thank the anonymous reviewers for their comments and suggestions. The work is supported by the Research Grants Council of the Hong Kong Special Administrative Region (Project no. GRF 14225616), the Shenzhen Science and Technology Program (No. JCYJ20170413162617606 and No. JCYJ20160429190300857), and the CUHK strategic recruitment fund.