When scanning a scene in the real world using lidar, the captured point clouds contain snippets of the scene, which must be aligned to produce a full map of the scanned environment.
Point clouds are often aligned with 3D models or with other point clouds, a process termed point set registration.
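One widely used algorithm for rigid point set registration is iterative closest point (ICP). The sketch below is a minimal illustration in Python, assuming two roughly pre-aligned NumPy arrays of 3D points; the function names, iteration count and stopping criterion are illustrative assumptions, and production pipelines normally rely on optimized libraries.

```python
# Illustrative (not production) sketch of rigid point set registration via
# iterative closest point (ICP); assumes the two clouds already overlap roughly.
import numpy as np
from scipy.spatial import cKDTree

def best_rigid_transform(src, dst):
    """Least-squares rotation R and translation t mapping src onto dst (Kabsch/SVD)."""
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_c).T @ (dst - dst_c)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:          # avoid reflections
        Vt[-1, :] *= -1
        R = Vt.T @ U.T
    t = dst_c - R @ src_c
    return R, t

def icp(source, target, iterations=50, tol=1e-6):
    """Align `source` (N x 3) to `target` (M x 3); returns the transformed source."""
    tree = cKDTree(target)
    src = source.copy()
    prev_err = np.inf
    for _ in range(iterations):
        dist, idx = tree.query(src)            # closest-point correspondences
        R, t = best_rigid_transform(src, target[idx])
        src = src @ R.T + t                    # apply the incremental transform
        err = dist.mean()
        if abs(prev_err - err) < tol:          # stop when the fit stops improving
            break
        prev_err = err
    return src
```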
For industrial metrology or inspection using industrial computed tomography, the point cloud of a manufactured part can be aligned to an existing model and compared to check for differences. Geometric dimensions and tolerances can also be extracted directly from the point cloud.
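As a hedged sketch of the comparison step, per-point deviations between a measured cloud and a reference cloud sampled from the nominal model can be estimated as nearest-neighbour distances; the tolerance value and synthetic data below are purely illustrative.

```python
# Illustrative deviation check: distance from each measured point to the
# nearest point of a reference cloud sampled from the nominal CAD model.
import numpy as np
from scipy.spatial import cKDTree

def inspect(measured, reference, tolerance=0.1):
    """Return per-point deviations and a mask of points outside `tolerance` (same units as the clouds)."""
    deviations, _ = cKDTree(reference).query(measured)
    return deviations, deviations > tolerance

# Synthetic data standing in for a scan and a sampled CAD surface:
reference = np.random.rand(10000, 3)
measured = reference + np.random.normal(scale=0.01, size=reference.shape)
dev, out_of_tol = inspect(measured, reference, tolerance=0.05)
print(f"max deviation: {dev.max():.4f}, points out of tolerance: {out_of_tol.sum()}")
```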
[Image caption: An example of a 1.2 billion data point cloud render of Beit Ghazaleh, a heritage site in danger in Aleppo, Syria.[8]]
[Image caption: Generating or reconstructing 3D shapes from single or multi-view depth maps or silhouettes and visualizing them in dense point clouds.[9]]
While point clouds can be directly rendered and inspected,[10][11] they are often converted to polygon mesh or triangle mesh models, non-uniform rational B-spline (NURBS) surface models, or CAD models through a process commonly referred to as surface reconstruction.
There are many techniques for converting a point cloud to a 3D surface.[12] Some approaches, like Delaunay triangulation, alpha shapes, and ball pivoting, build a network of triangles over the existing vertices of the point cloud, while other approaches convert the point cloud into a volumetric distance field and reconstruct the implicit surface so defined through a marching cubes algorithm.[13]
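The following is a minimal sketch of the volumetric route described above, assuming NumPy, SciPy and scikit-image are available: it samples an unsigned distance field on a regular grid and extracts a thin iso-surface with marching cubes. Real pipelines usually construct a signed distance or indicator field (e.g. using estimated normals), and the grid resolution and iso-level below are arbitrary choices.

```python
# Illustrative volumetric reconstruction: sample an unsigned distance field on a
# regular grid around the cloud, then extract an iso-surface with marching cubes.
import numpy as np
from scipy.spatial import cKDTree
from skimage import measure

def reconstruct_crust(points, resolution=64):
    """Return (vertices, faces) of a thin surface around `points` (N x 3)."""
    lo, hi = points.min(axis=0), points.max(axis=0)
    axes = [np.linspace(l, h, resolution) for l, h in zip(lo, hi)]
    grid = np.stack(np.meshgrid(*axes, indexing="ij"), axis=-1).reshape(-1, 3)
    # Unsigned distance from every grid node to its nearest input point.
    dist, _ = cKDTree(points).query(grid)
    field = dist.reshape(resolution, resolution, resolution)
    iso_level = 2.0 * (hi - lo).max() / resolution            # a few cells thick
    spacing = tuple((hi - lo) / (resolution - 1))             # voxel edge lengths
    verts, faces, _, _ = measure.marching_cubes(field, level=iso_level, spacing=spacing)
    return verts + lo, faces                                  # shift back to world coordinates
```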
In geographic information systems, point clouds are one of the sources used to make digital elevation models of the terrain.[14] They are also used to generate 3D models of urban environments.[15] Drones are often used to collect a series of RGB images which can later be processed with photogrammetry software such as AgiSoft Photoscan, Pix4D, DroneDeploy or Hammer Missions to create RGB point clouds, from which distances and volumetric estimates can be made.[citation needed]
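A simple, illustrative way to turn a point cloud into an elevation raster is to bin points into a regular grid and keep one elevation per cell. The sketch below keeps the highest return per cell (strictly a surface model rather than a terrain model, which would first require ground filtering), and the cell size is an assumed parameter.

```python
# Illustrative rasterization of a point cloud into an elevation grid: each cell
# of the raster keeps the highest point that falls inside it.
import numpy as np

def point_cloud_to_dem(points, cell_size=1.0):
    """Grid `points` (N x 3, columns x/y/z) into square cells; empty cells become NaN."""
    xy, z = points[:, :2], points[:, 2]
    origin = xy.min(axis=0)
    cols, rows = np.floor((xy.max(axis=0) - origin) / cell_size).astype(int) + 1
    dem = np.full((rows, cols), -np.inf)
    i, j = np.floor((xy - origin) / cell_size).astype(int).T   # column, row of each point
    np.maximum.at(dem, (j, i), z)                              # highest elevation per cell
    dem[np.isinf(dem)] = np.nan                                # cells that received no points
    return dem, origin
```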
Point clouds can also be used to represent volumetric data, as is sometimes done in medical imaging. Using point clouds, multi-sampling and data compression can be achieved.[16]
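In the spirit of the resampling and compression mentioned above, one elementary reduction is voxel-grid downsampling, which replaces all points falling in a voxel by their centroid; the voxel size below is an illustrative assumption.

```python
# Illustrative voxel-grid downsampling: quantize each point to a voxel index and
# replace all points sharing a voxel by their centroid.
import numpy as np

def voxel_downsample(points, voxel_size=0.05):
    """Return one centroid per occupied voxel of edge length `voxel_size`."""
    keys = np.floor(points / voxel_size).astype(np.int64)
    _, inverse, counts = np.unique(keys, axis=0, return_inverse=True, return_counts=True)
    inverse = inverse.reshape(-1)                      # one voxel id per input point
    centroids = np.zeros((counts.size, points.shape[1]))
    np.add.at(centroids, inverse, points)              # sum the points of each voxel
    return centroids / counts[:, None]                 # divide by the voxel populations
```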
MPEG began standardizing point cloud compression (PCC) with a Call for Proposals (CfP) in 2017.[17][18][19] Three categories of point clouds were identified: category 1 for static point clouds, category 2 for dynamic point clouds, and category 3 for Lidar sequences (dynamically acquired point clouds). Two technologies were ultimately defined: G-PCC (Geometry-based PCC, ISO/IEC 23090 part 9)[20] for categories 1 and 3, and V-PCC (Video-based PCC, ISO/IEC 23090 part 5)[21] for category 2. The first test models were developed in October 2017, one for G-PCC (TMC13) and another for V-PCC (TMC2). Since then, the two test models have evolved through technical contributions and collaboration, and the first version of the PCC standard specifications was expected to be finalized in 2020 as part of the ISO/IEC 23090 series on the coded representation of immersive media content.[22]
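G-PCC represents point positions with an octree decomposition of space. The toy sketch below serializes one occupancy byte per octree node to convey the idea; it is not the standard's actual bitstream syntax or entropy coding, and the quantization depth and traversal order are illustrative assumptions.

```python
# Toy sketch of octree occupancy coding, the idea behind geometry coding in
# G-PCC: quantize points to a 2**depth grid and emit, level by level, one byte
# per occupied node marking which of its eight children are occupied.  A real
# codec adds entropy coding, attribute coding and a normative traversal order.
import numpy as np

def encode_octree_geometry(points, depth=8):
    """Return the breadth-first occupancy bytes of an octree over `points` (N x 3)."""
    lo = points.min(axis=0)
    span = np.ptp(points, axis=0).max()
    voxels = np.minimum((points - lo) / span * 2**depth, 2**depth - 1).astype(np.int64)
    voxels = np.unique(voxels, axis=0)                             # occupied leaf cells
    stream = bytearray()
    for level in range(depth):
        child = np.unique(voxels >> (depth - level - 1), axis=0)   # occupied nodes one level down
        parent = [tuple(p) for p in (child >> 1)]
        bit = child[:, 0] % 2 * 4 + child[:, 1] % 2 * 2 + child[:, 2] % 2
        occupancy = {}
        for p, b in zip(parent, bit):
            occupancy[p] = occupancy.get(p, 0) | (1 << int(b))
        stream.extend(occupancy[p] for p in sorted(occupancy))     # deterministic node order
    return bytes(stream)
```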
^ Levoy, M. and Whitted, T., "The use of points as a display primitive". Technical Report 85-022, Computer Science Department, University of North Carolina at Chapel Hill, January 1985.
^ Rusinkiewicz, S. and Levoy, M. 2000. QSplat: a multiresolution point rendering system for large meshes. In SIGGRAPH 2000. ACM, New York, NY, 343–352. DOI=http://doi.acm.org/10.1145/344779.344940