CN111060006B - A viewpoint planning method based on three-dimensional model - Google Patents

A viewpoint planning method based on three-dimensional model

Info

Publication number
CN111060006B
CN111060006B (application CN201911229047.4A)
Authority
CN
China
Prior art keywords
viewpoint
voxel
dimensional
dimensional model
scanning
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201911229047.4A
Other languages
Chinese (zh)
Other versions
CN111060006A (en)
Inventor
陈海龙
刘晓利
彭翔
刘梦龙
张青松
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Esun Display Co ltd
Shenzhen University
Original Assignee
Shenzhen Esun Display Co ltd
Shenzhen University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Esun Display Co ltd, Shenzhen University
Publication of CN111060006A
Application granted
Publication of CN111060006B
Active (current legal status)
Anticipated expiration

Abstract

The present invention provides a viewpoint planning method based on a three-dimensional model, comprising the steps of: S1, finding a voxel v_i using an initial sampling point s_k of the three-dimensional model, and searching with the voxel v_i as a seed to obtain a valid voxel set {v_i}; S2, solving the labeling score of each voxel in the voxel set {v_i} using a labeling function g(s_k); S3, selecting the voxel with the largest labeling score for viewpoint calculation; S4, setting the labeling function to 0 and repeating steps S2 to S4 until the labeling scores of all voxels fall below a threshold. Based on a rough three-dimensional model, the invention automatically generates a series of scanning viewpoints under given constraints, so that complete three-dimensional digital color imaging of an object can be achieved with a minimum number of viewpoints during fine three-dimensional scanning.

Description

Viewpoint planning method based on three-dimensional model
Technical Field
The invention belongs to the technical field of electronics, and particularly relates to a viewpoint planning method based on a three-dimensional model.
Background
Among optical three-dimensional measurement technologies, phase-based active binocular vision 3D imaging is considered the most effective technique for accurately detecting and reconstructing the three-dimensional shape of an object, owing to its non-contact operation, speed and high precision.
However, in optical three-dimensional measurement and imaging, the limited measurement range of the three-dimensional sensor means that dimensional and topological changes of the measured object affect the overall measurement and imaging to varying degrees. This is especially challenging for automatic scanning: the scan must not only satisfy the completeness requirement of three-dimensional scanning, but must also coordinate and control the pose relationship between the three-dimensional sensor and the measured surface, so as to guarantee the accuracy and efficiency of three-dimensional digital measurement.
To address these challenges in current three-dimensional measurement, the invention provides a viewpoint planning method based on a three-dimensional model, which lays the groundwork for high-precision three-dimensional color digital imaging.
Disclosure of Invention
In order to solve the above problems, the present invention provides a three-dimensional model-based viewpoint planning method, which is characterized by comprising the following steps:
S1, using an initial sampling point s_k of the three-dimensional model to find a voxel v_i, and searching with the voxel v_i as a seed to obtain a valid voxel set {v_i};
S2, solving the labeling score of each voxel in the voxel set {v_i} using a labeling function g(s_k);
S3, selecting the voxel with the largest labeling score for viewpoint calculation;
S4, setting the labeling function to 0, and repeating steps S2 to S4 until the labeling scores of all the voxels are lower than a threshold.
In one embodiment, the step S1 further includes:
For the initial sampling point s_k, the voxel v_i at the position along its normal n_k at distance d_0 = (d_n + d_f)/2 can be found according to the following formula:
n_x = floor((p_x - p_x_min)/ΔD),  n_y = floor((p_y - p_y_min)/ΔD),  n_z = floor((p_z - p_z_min)/ΔD)
where ΔD is the distance interval at which the model space is divided into a 3D voxel grid, (p_x, p_y, p_z) are the coordinates of a three-dimensional space point, (p_x_min, p_y_min, p_z_min) is the minimum coordinate of the space bounding box S, and v_i = (n_x, n_y, n_z) is the voxel index.
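For illustration only, the voxel lookup above maps directly to a few lines of code. The following Python sketch is not part of the patent; the function name point_to_voxel, the grid spacing and all numeric values are assumptions made for the example.

```python
import numpy as np

def point_to_voxel(p, p_min, delta_d):
    """Map a 3D point p to its voxel index (n_x, n_y, n_z) on a grid of spacing delta_d."""
    idx = np.floor((np.asarray(p, dtype=float) - np.asarray(p_min, dtype=float)) / delta_d)
    return tuple(int(n) for n in idx)

# Seed voxel for a sampling point s_k with normal n_k, placed at
# distance d0 = (d_n + d_f) / 2 along the normal (all values made up).
s_k = np.array([0.10, 0.05, 0.30])
n_k = np.array([0.0, 0.0, 1.0])
d_n, d_f = 0.35, 0.65                  # assumed working-distance range
d0 = (d_n + d_f) / 2.0
p_min = np.array([-0.5, -0.5, 0.0])    # assumed bounding-box minimum corner
delta_d = 0.01                         # assumed voxel edge length (ΔD)

v_i = point_to_voxel(s_k + d0 * n_k, p_min, delta_d)
print(v_i)                             # e.g. (60, 55, 80)
```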
In one embodiment, the search in step S1 is an expansion search over neighbouring voxels using a greedy algorithm, evaluated according to a visibility constraint.
In one embodiment, the visibility constraint represents the angular range within which the measurement target point may be acquired. Let the normal vector of the measurement target point p_k be n_k; the visibility constraint is then
acos( n_k · (-v_ik) / (|n_k| |v_ik|) ) ≤ α_max
where α_max denotes the maximum visible angle range of the measurement target point and v_ik = d(v_i, s_k) is the vector from the viewpoint position v_i to the target point.
In one embodiment, the labeling function g(s_k) marks the usage of the sampling point s_k: the label is 1 when s_k has not yet been confirmed as belonging to any scanning viewpoint, and 0 once s_k has been confirmed as belonging to a scanning viewpoint.
In one embodiment, the viewpoint calculation includes calculating a spatial position of the viewpoint.
In one embodiment, the viewpoint calculation further comprises calculating a direction vector of the viewpoint.
In one embodiment, the calculation of the direction vector comprises the steps of:
using histogram statistics to count and select the vectors d(v_i, s_k) of all sampling points s_k;
converting the vectors d(v_i, s_k) from the Cartesian coordinate system (x, y, z) to the spherical coordinate system;
determining the size of the filter window in the XY plane as φ_filter = φ_max(1 - ξ_min), traversing all elements (x, y) of the histogram's XY plane with the filter and summing the number of sampling points inside the filter; when the statistic inside the filter window is maximal, the sampling points contained in the filter are {s'_k}, k ∈ N, where N is the number of points s'_k with labeling weight g(s'_k) = 1, and the mean of the vectors d(v_i, s'_k) over these points is taken as the scanning viewpoint direction.
In one embodiment, the filter window is determined from the measurement space constraint φ_max and the overlap constraint ξ_min. The measurement space constraints include a field of view (FOV) constraint and a depth of field (DOF) constraint: the target point must lie within the sensor's view cone of maximum field angle φ_max about the optical axis and within the working distance range [d_n, d_f], where φ_max denotes the maximum field angle of the three-dimensional sensor.
The overlap constraint requires a certain field-of-view overlap between adjacent scanning fields of view; defining the field-of-view overlap as ξ = W_cover / W, where W and W_cover respectively denote the total field-of-view area and the area of the overlapping portion, the constraint is
ξ ≥ ξ_min
where ξ_min is the minimum field-of-view overlap.
The beneficial effect of the invention is that it provides a global NBVs (Next Best Views) algorithm: based on a rough three-dimensional model, a series of scanning viewpoints is generated automatically under given constraints, so that complete three-dimensional digital color imaging of an object can be achieved with a minimum number of viewpoints during fine three-dimensional scanning.
Drawings
FIG. 1 is a schematic diagram of a three-dimensional color digitizing system according to an embodiment of the invention.
FIG. 2 is a schematic diagram of system coordinate system distribution and transformation relationships according to one embodiment of the invention.
Fig. 3 is a schematic diagram of a low-cost three-dimensional target based on non-coded marker points according to one embodiment of the invention.
FIG. 4 is a schematic representation of the constraint relationship of a binocular vision three dimensional sensor in accordance with one embodiment of the present invention.
Fig. 5 is a schematic diagram of the visible voxel range of an ISO point (a) and the ISO points recorded by a voxel (b) according to one embodiment of the present invention.
Fig. 6 is a statistical schematic diagram of intra-voxel vector histograms according to an embodiment of the invention.
FIG. 7 is a NBVs algorithm flow diagram according to one embodiment of the invention.
Detailed Description
The invention will now be described in further detail with reference to the following detailed description and with reference to the accompanying drawings, it being emphasized that the following description is merely exemplary in nature and is not intended to limit the scope of the invention and its application.
System description
FIG. 1 is a schematic diagram of a three-dimensional color digitizing system according to an embodiment of the invention. The system 10 includes a base 101, a robotic arm 102, an imaging module 103, a rotational axis 105, and a processor (not shown).
The base 101 is used to hold the object 104; the base is not strictly necessary for the system and may, for example, be replaced by another plane or structure.
The imaging module 103 includes a color three-dimensional sensor and a depth camera 1035. The color three-dimensional sensor comprises an active binocular vision camera, composed of a left camera 1031, a right camera 1032 and a projector 1033, together with a color camera 1034; these are used respectively to collect a first three-dimensional image and a color image of the object 104. Using the relative position information between the cameras (obtained by calibration), the first three-dimensional image and the color image can be aligned to obtain a three-dimensional color image of the object; alternatively, the color image collected by the color camera can be texture-mapped onto the three-dimensional image to color it and thereby obtain the three-dimensional color image. In one embodiment, the left and right cameras 1031, 1032 are high-resolution black-and-white cameras, and the projector may be a digital fringe projector for projecting coded structured-light images; the left and right cameras 1031, 1032 capture the phase structured-light images and perform high-precision three-dimensional imaging based on phase-assisted active stereoscopic vision (PAAS) techniques. In one embodiment, the left and right cameras may also be infrared cameras, and parameters of the left and right cameras, such as focal length, resolution and depth of field, may or may not be the same. The first three-dimensional image is the three-dimensional image of the object 104 acquired by the color three-dimensional sensor.
The depth camera 1035 is used to acquire a second three-dimensional image of the object; it may be a depth camera based on time of flight (TOF), structured light, or passive binocular vision. Typically, at least one of the resolution, precision and frame rate of the second three-dimensional image is lower than that of the first three-dimensional image, and usually all three are lower. For convenience, in the following description the first three-dimensional image of the object is referred to as the high-precision fine three-dimensional model, and the second three-dimensional image as the low-precision rough three-dimensional model. The second three-dimensional image is the three-dimensional image of the object 104 acquired by the depth camera 1035.
The mechanical arm 102 and the rotating shaft 105 form a pose adjustment module for holding the imaging module 103 and adjusting its pose. The mechanical arm 102 is connected to the imaging module 103 and the rotating shaft 105; the rotating shaft 105 is mounted on the base 101 and rotates around it, while the mechanical arm 102 is a multi-axis linkage arm that performs the corresponding pose adjustment. Through joint adjustment of the rotating shaft 105 and the mechanical arm 102, the imaging module 103 can be moved through multiple viewing directions, so that the measured object 104 can be measured from multiple orientations. In some embodiments, the rotating shaft 105 includes a rotation motor and, driven by the rotating shaft, the mechanical arm rotates around the base to measure the measured object.
The processor is connected to the mechanical arm 102, the imaging module 103, and the rotation axis 105, and is used for performing control and corresponding data processing or three-dimensional scanning tasks, such as three-dimensional color image extraction, rough three-dimensional model establishment, fine three-dimensional model establishment, and the like. It will be appreciated that the processor may be a single processor or may be multiple independent processors, such as multiple specialized processors may be included in the imaging module for performing algorithms such as three-dimensional imaging. The system further includes a memory for storing algorithm programs to be executed by the processor, such as the various algorithms, methods (calibration method, reconstruction method, viewpoint generation algorithm, scanning method, etc.) mentioned in the present invention, and the memory may be various computer readable media, such as non-transitory storage media, including magnetic media and optical media, for example, magnetic disk, tape, CDROM, RAM, ROM, etc.
It is to be understood that the three-dimensional image may refer to a depth image, or may refer to point cloud data, mesh data, three-dimensional model data, or the like obtained based on further processing of the depth image.
When the system 10 is used for three-dimensional scanning of the object 104, the whole scanning process is executed by the processor and is divided into the following steps:
First, the depth camera 1035 and the color three-dimensional sensor are calibrated to obtain their internal and external parameters; the specific process is described later.
Second, a low-precision rough three-dimensional model of the object 104 is acquired with the depth camera 1035, for example by using the rotating shaft 105 and the mechanical arm 102 to move the depth camera 1035 around the object 104 for one full revolution so as to quickly generate the rough model. It will be understood that the object 104 needs to be placed on the base 101 in advance; in one embodiment, the object 104 is placed at the center of the base 101.
Third, global scanning viewpoints are calculated from the low-precision rough three-dimensional model; specifically, they are generated automatically by the NBVs algorithm provided by the invention.
Fourth, high-precision three-dimensional scanning of the measured object 104 is performed with the active binocular vision camera according to the generated global scanning viewpoints and the shortest-path plan, so as to obtain a first high-precision fine three-dimensional model.
In some embodiments, a confidence map is further computed for the first high-precision fine three-dimensional model, and regions with missing data or missing detail are identified and supplementally scanned to obtain a second, more accurate high-precision fine three-dimensional model.
In some embodiments, the color camera synchronously collects color images while the first and/or second high-precision fine three-dimensional models are acquired, and the color images are texture-mapped to color the fine three-dimensional model, yielding a three-dimensional color digital image and finally achieving high-fidelity three-dimensional color digitization of the complete object.
System calibration
Before the system 10 is used to perform three-dimensional scanning on the object 104, each component in the system needs to be calibrated to obtain a relative positional relationship between coordinate systems where each component is located, and corresponding operations, such as color coloring, global scanning viewpoint generation based on a rough three-dimensional model, and the like, can be performed based on the relative positional relationship.
FIG. 2 is a schematic diagram of the system coordinate system distribution and transformation relationships according to one embodiment of the invention. The world coordinate system is established on the base coordinate system, the color three-dimensional sensor coordinate system on the left camera S_l, and the depth camera coordinate system on the infrared camera S_i inside the depth camera. System calibration must determine the internal and external parameters of the color three-dimensional sensor and the transformation matrices between the color three-dimensional sensor/depth camera coordinate systems, the mechanical arm base coordinate system and the base coordinate system. The difficulty of system calibration here is that the system contains sensors with different resolutions and fields of view (for example, a 20-megapixel color camera; 5-megapixel left and right cameras with a lens FOV of H 39.8°, V 27.6°; and a 0.3-megapixel depth camera with an FOV of H 58.4°, V 45.5°) as well as sensors with different spectral response ranges (the color camera and the black-and-white cameras respond in the visible band, while the infrared camera responds in the infrared band), and the calibration accuracy of the color three-dimensional sensor must be guaranteed; the design and manufacture of a high-precision three-dimensional target is therefore the key to high-precision calibration.
Fig. 3 is a schematic diagram of a low-cost three-dimensional target based on non-coded marker points according to one embodiment of the invention. The three-dimensional target consists of a first sub-target A and a second sub-target B. The first sub-target A consists of a plane whose surface carries regularly arranged non-coded marker points (for example 11 × 9); the accurate spatial coordinates of these marker points can be determined by bundle adjustment. The marker points comprise datum points and positioning points, with at least four positioning points; to improve the marker extraction accuracy of the low-resolution depth camera, the datum points and positioning points are designed as large circles. Each positioning point and datum point contains a small black concentric marker (for example concentric circles), and positioning points are distinguished from datum points by the gray level at the circle center (for example, a center gray level larger than 125 for positioning points and smaller than 125 for datum points, i.e. the center gray levels of datum and positioning points differ), as shown in Fig. 3(c). This design greatly enlarges the datum points and improves the localization accuracy of the positioning points. The second sub-target B consists of several planes whose surfaces carry randomly pasted non-coded marker points used for rotation-axis calibration; the marker points of the second sub-target B are designed as small circles, small relative to the marker points of the first sub-target A. During calibration, the color three-dimensional sensor moves around the three-dimensional target and reconstructs the spatial coordinates of the random marker points from multiple viewing angles, and the base coordinate system is determined through marker matching optimization; the spatial coordinates of the random marker points of the second sub-target B therefore do not need to be determined in advance, which greatly reduces the difficulty and cost of target manufacture.
The calibration process comprises two steps. (1) With the rotating shaft (rotation motor) kept stationary, the mechanical arm carries the color three-dimensional sensor to acquire the first sub-target A from multiple viewing angles, and the internal and external parameters of the color three-dimensional sensor, H_lm and H_im, are calculated. Because the left, right, color and infrared cameras work under light sources in different spectral bands, in each acquisition the left, right and color cameras first acquire target images under visible illumination, and then the infrared light source is switched on and the infrared camera acquires its target images. (2) With the mechanical arm holding its pose, the motor is rotated to different angles; at each angle the left and right cameras reconstruct the three-dimensional coordinates of the random marker points of target part B using the binocular stereo vision principle, the rotation angle is determined by marker point matching, and from this the base coordinate system is constructed and H_ba is calculated.
In one embodiment, when the color three-dimensional sensor is calibrated, the three cameras (left, right and infrared) simultaneously acquire target patterns from different viewing angles, and an objective function of the single-camera calibration model, the total reprojection error, is constructed:
E(K, ε, {H_i}) = Σ_{i=1..N} Σ_{j=1..M} || x_ij - p(K, ε, H_i, X_j) ||²
where X_j denotes the spatial homogeneous coordinates of the j-th of the M marker points in the target coordinate system, x_ij (i = 1, ..., N) denotes the image coordinates of the j-th marker point in the image acquired by the camera at the i-th viewing angle, K is the internal matrix of the camera, including focal length, principal point position and tilt factor, ε is the lens distortion, for which only the typical fifth-order lens distortion is considered [Fryer, 1986], H_i denotes the transformation matrix from the target coordinate system to the camera coordinate system at the i-th viewing angle, and p(·) denotes the projection of a target point into the image.
In general, taking the left camera coordinate system as the three-dimensional sensor coordinate system, the structural parameters of the three cameras are the transforms H_lr = [R_lr | t_lr] and H_lc = [R_lc | t_lc], where R_lr and t_lr are the rotation matrix and translation vector from the left camera S_l to the right camera S_r, and R_lc and t_lc are the rotation matrix and translation vector between the left camera S_l and the color camera S_c. To obtain higher-precision structural parameters, these transformation matrices are added to a joint nonlinear objective function over the three cameras, and the camera parameters are estimated by minimizing this objective function with the Gauss-Newton or Levenberg-Marquardt method over
τ = { ε_l, ε_r, ε_c, K_l, K_r, K_c, H_lr, H_lc },
from which the internal and external parameters of the color three-dimensional sensor are obtained. The parameter solution for the infrared camera is similar.
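As a rough illustration of this kind of reprojection-error minimization, the sketch below refines a single camera's intrinsics and per-view poses with scipy's Levenberg-Marquardt solver. It is a simplified stand-in, not the patent's joint three-camera objective (which additionally couples the cameras through H_lr and H_lc); the function names, the two-coefficient radial distortion model and the parameter packing are assumptions.

```python
import numpy as np
from scipy.optimize import least_squares

def project(points, rvec, tvec, fx, fy, cx, cy, k1, k2):
    """Pinhole projection with two radial distortion coefficients (reduced model)."""
    theta = np.linalg.norm(rvec)
    if theta < 1e-12:
        R = np.eye(3)
    else:
        k = rvec / theta
        K = np.array([[0, -k[2], k[1]],
                      [k[2], 0, -k[0]],
                      [-k[1], k[0], 0]])
        R = np.eye(3) + np.sin(theta) * K + (1 - np.cos(theta)) * (K @ K)  # Rodrigues formula
    pc = points @ R.T + tvec
    x, y = pc[:, 0] / pc[:, 2], pc[:, 1] / pc[:, 2]
    r2 = x * x + y * y
    d = 1.0 + k1 * r2 + k2 * r2 * r2
    return np.stack([fx * x * d + cx, fy * y * d + cy], axis=1)

def residuals(params, object_pts, image_pts_per_view):
    """Stacked reprojection residuals over all views for one camera."""
    fx, fy, cx, cy, k1, k2 = params[:6]
    res = []
    for i, img_pts in enumerate(image_pts_per_view):
        rvec = params[6 + 6 * i: 9 + 6 * i]     # target-to-camera rotation (axis-angle)
        tvec = params[9 + 6 * i: 12 + 6 * i]    # target-to-camera translation
        res.append((project(object_pts, rvec, tvec, fx, fy, cx, cy, k1, k2) - img_pts).ravel())
    return np.concatenate(res)

# object_pts: (M, 3) marker coordinates in the target frame;
# image_pts_per_view: list of (M, 2) detected image points, one array per view;
# x0 stacks the intrinsics followed by one (rvec, tvec) per view.
# sol = least_squares(residuals, x0, args=(object_pts, image_pts_per_view), method="lm")
```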
After the calibration of the color three-dimensional sensor is completed, the transformation matrix of the left camera at each acquisition viewing angle can be obtained, and the pose of the arm end-effector is given directly by the mechanical arm control system. From the mathematical model of hand-eye calibration, a relation of the form A X = X B is established between the relative motions of the arm end-effector and of the camera, where i, k = 1, 2, ..., N with i ≠ k, N is the number of scans, and N motion poses can be formed. Following Tsai's method [30], H_sg and H_cb can then be solved by linear least squares.
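For reference, OpenCV ships a hand-eye solver that includes Tsai's linear method and could be used for this step. The snippet below is only a usage sketch under assumed variable names and an assumed eye-in-hand convention; it does not reproduce the patent's notation (H_sg, H_cb).

```python
import cv2
import numpy as np

# R_gripper2base / t_gripper2base : arm end-effector poses reported by the robot controller
# R_target2cam  / t_target2cam    : target poses estimated by the calibrated camera
# One 3x3 rotation and one 3x1 translation per acquisition view.
R_gripper2base, t_gripper2base = [], []
R_target2cam, t_target2cam = [], []
# ... fill the four lists from the N calibration views ...

if R_gripper2base:  # the solver needs real data
    R_cam2gripper, t_cam2gripper = cv2.calibrateHandEye(
        R_gripper2base, t_gripper2base,
        R_target2cam, t_target2cam,
        method=cv2.CALIB_HAND_EYE_TSAI,  # Tsai's linear least-squares solution
    )
    H_cam2gripper = np.eye(4)
    H_cam2gripper[:3, :3] = R_cam2gripper
    H_cam2gripper[:3, 3] = t_cam2gripper.ravel()
```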
In one embodiment, to further improve accuracy, a nonlinear objective function is set up with this linear solution as the initial value, where the arm poses can be obtained from the mechanical arm in real time; the Levenberg-Marquardt method is used to minimize the objective function, yielding a higher-precision solution of H_lm and H_bt. The solution of H_im is similar and is not discussed further.
In one embodiment, during rotation-axis calibration the mechanical arm keeps its pose unchanged, and the transformation matrix from the mechanical arm to the base is recorded as H'_gb. The three-dimensional sensor moves in a circle around the three-dimensional target and reconstructs the random marker points of target part B at different rotation angles, where T is the number of rotation positions and j is the index of a marker point. Global matching optimization is performed on the marker points reconstructed in all fields of view to obtain the transformation relation [R^(m) | T^(m)] of the target marker points at each rotation angle. The rotation-axis direction vector is then calculated under the constraint of the distance between every two circular-trajectory planes, and the center of each circular trajectory is obtained by global least-squares optimization, thereby determining the transformation relation H_rl from the three-dimensional sensor coordinate system to the base coordinate system. The transformation H_br between the mechanical arm base coordinate system and the base coordinate system can then be obtained from the relation (H_rl)^(-1) = H_br · H'_mb · H_lm.
Global scanning viewpoint generation
According to the stereoscopic imaging model, the three-dimensional imaging system is limited by the included angle (FOV) of the binocular cameras and by the focal length and depth of field (DOF) of the camera lenses and the digital projection lens; the measurement space of the three-dimensional sensor is therefore bounded, and the quality of the reconstructed point cloud is affected by several constraints. The constraints and the viewpoint generation method are described below.
FIG. 4 is a schematic representation of the constraint relationships of the binocular vision three-dimensional sensor in accordance with one embodiment of the present invention. Fig. 4(a) shows the basic structure and measurement space of the binocular sensor, Fig. 4(b) the measurement space constraints of the three-dimensional sensor, and Fig. 4(c) the point cloud visibility constraint. For simplicity the invention does not elaborate the calculation of the exact view volume; the measurement space is simplified as shown in Fig. 4(b). Let the working distance range of the 3D sensor be [d_n, d_f] and the maximum view angle be φ_max; v_i(x, y, z) denotes the viewpoint position, v_i(α, β, γ) denotes the unit vector of the 3D sensor's optical axis direction, and v_ik = d(v_i, s_k) denotes the vector from the viewpoint position v_i to the measurement target point position s_k. The viewpoint planning process is influenced by the object surface space, the viewpoint space and the imaging workspace, whose constraints mainly include, but are not limited to, at least one of the following:
1) Visibility constraint: this represents the angular range within which the measurement target point may be acquired by the sensor. Let the normal vector of the measurement target point p_k be n_k; the visibility condition is
acos( n_k · (-v_ik) / (|n_k| |v_ik|) ) ≤ α_max
where α_max denotes the maximum visible angle range of the measurement target point, as shown in Fig. 4(c).
2) Measurement space constraint, comprising a field of view (FOV) constraint and a depth of field (DOF) constraint, which together represent the measurable range of the three-dimensional sensor: the target point must lie within the sensor's view cone of maximum field angle φ_max about the optical axis direction v_i(α, β, γ), and within the working distance range, d_n ≤ |v_ik| ≤ d_f, where φ_max denotes the maximum field angle of the three-dimensional sensor, as shown in Fig. 4(b). (A minimal sketch of the visibility and measurement-space checks is given after this list.)
3) Overlap constraint: for ICP matching and mesh fusion (registration and integration) of the subsequent multi-view depth data, a certain field-of-view overlap is required between adjacent scanning fields of view. Defining the field-of-view overlap as ξ = W_cover / W, where W and W_cover respectively denote the total field-of-view area and the area of the overlapping portion, the constraint is
ξ ≥ ξ_min (8)
where ξ_min is the minimum field-of-view overlap.
4) Occlusion constraint: when the line segment d(v_i, s_k) from the viewpoint v_i to the measurement target point s_k intersects the object itself, the target point s_k is occluded from the viewpoint v_i along the viewing direction v_ik.
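A minimal sketch of constraint checks 1) and 2) is given below, under the reading of the constraints described above (angles taken between the target normal and the back-pointing view vector, and between the optical axis and the view vector). All names and numeric values are assumptions, and the occlusion test of constraint 4) is not implemented here.

```python
import numpy as np

def visible(v_i, s_k, n_k, alpha_max):
    """Visibility constraint 1): the angle between the target normal n_k and the
    vector pointing back from the target to the viewpoint must not exceed alpha_max."""
    v_ik = s_k - v_i                              # vector from viewpoint to target point
    c = np.dot(n_k, -v_ik) / (np.linalg.norm(n_k) * np.linalg.norm(v_ik))
    return np.arccos(np.clip(c, -1.0, 1.0)) <= alpha_max

def in_measurement_space(v_i, axis, s_k, d_n, d_f, phi_max):
    """Measurement-space constraint 2): target within the working distance [d_n, d_f]
    and inside the view cone of full angle phi_max around the optical axis."""
    v_ik = s_k - v_i
    dist = np.linalg.norm(v_ik)
    c = np.dot(axis, v_ik) / (np.linalg.norm(axis) * dist)
    in_cone = np.arccos(np.clip(c, -1.0, 1.0)) <= phi_max / 2.0
    return (d_n <= dist <= d_f) and in_cone

# Example check for one candidate viewpoint (all values made up):
v_i = np.array([0.0, 0.0, 0.5])        # candidate viewpoint position
axis = np.array([0.0, 0.0, -1.0])       # sensor optical axis direction
s_k = np.array([0.02, 0.01, 0.0])       # measurement target point
n_k = np.array([0.0, 0.0, 1.0])         # target normal
ok = visible(v_i, s_k, n_k, np.deg2rad(60)) and \
     in_measurement_space(v_i, axis, s_k, 0.35, 0.65, np.deg2rad(40))
```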
For an object of unknown shape, an initial scan around the object is first performed with the depth camera to generate a rough three-dimensional model. Because the purpose of this step is only to generate global scanning viewpoints from the model, the rough three-dimensional model requires neither high accuracy and resolution nor particularly complete scan data. In addition, depth cameras generally offer a wide scanning angle, a large depth range of the measurement space and good real-time performance, so for most objects of different sizes and surface materials a preset group of scanning poses is sufficient for an initial scan of the object's appearance.
In one embodiment, the data are matched and fused in real time during the initial scan using a matching and fusion algorithm such as KinectFusion. After the initial scan is completed, the raw point cloud is preprocessed by noise filtering, smoothing, edge removal and normal estimation, and an initial closed triangular mesh model is then generated. Poisson-disk sampling of this model yields the so-called ISO points; as shown in Fig. 4(b), the model sampling points are denoted {s_k}.
A minimum bounding box S containing the model and the scanning space is constructed from the initial model size and the maximum working distance d_f of the scanner, and this space is divided into a 3D voxel grid (e.g., 100 × 100 × 100 voxels) at a distance interval ΔD. For any spatial point (p_x, p_y, p_z) in S, the voxel it belongs to can be found quickly according to equation (9):
n_x = floor((p_x - p_x_min)/ΔD),  n_y = floor((p_y - p_y_min)/ΔD),  n_z = floor((p_z - p_z_min)/ΔD)    (9)
where (p_x_min, p_y_min, p_z_min) is the minimum coordinate of the bounding box S and v_i = (n_x, n_y, n_z) is the voxel index. The center point of each voxel participates as a spatial three-dimensional point in the Next Best Views (NBVs) calculation below. The NBVs algorithm here is divided into the following steps:
Step 1: for each initial model sampling point s_k, the voxel v_i at the position along its normal n_k at distance d_0 = (d_n + d_f)/2 is found according to equation (9). Using v_i as the search seed, a greedy algorithm performs an expansion search over the neighbouring voxels and, according to the visibility constraint, the numbers of the voxels satisfying formula (10) are recorded in the association set of sampling point s_k, as shown in Fig. 5(a). In formula (10), v_ik = d(v_i, s_k) denotes the vector from point v_i to point s_k, w_ik(v_i, s_k) = 1 indicates that s_k is visible from v_i, and w_ik(v_i, s_k) = 0 indicates that there is an occlusion between s_k and v_i. While a voxel is recorded for s_k, the pair (s_k, v_ik) is also recorded in the association set of every voxel v_i satisfying formula (10). Step 1 is performed on all ISO points {s_k}, which yields all valid voxels {v_i}, i.e. the voxels for which ISO points were recorded; voxels with no recorded points are deemed invalid and take no further part in the computation.
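One possible realization of the Step 1 expansion search is sketched below: a frontier-based greedy expansion through the 6-neighbourhood of the seed voxel, stopping along directions whose voxels no longer satisfy formula (10). The helper names (constraint_ok, voxel_center) and the 6-neighbourhood choice are assumptions; the visibility and occlusion test of formula (10) is supplied by the caller.

```python
from collections import deque

def expand_from_seed(seed, s_k, n_k, grid_shape, voxel_center, constraint_ok):
    """Greedy expansion from the seed voxel: grow through the 6-neighbourhood as long
    as a voxel's centre still satisfies formula (10) for sampling point s_k.
    Returns the voxel indices to record in the association set of s_k."""
    recorded, frontier, seen = [], deque([seed]), {seed}
    while frontier:
        v = frontier.popleft()
        if not constraint_ok(voxel_center(v), s_k, n_k):
            continue                     # do not expand through voxels failing (10)
        recorded.append(v)
        for d in ((1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0), (0, 0, 1), (0, 0, -1)):
            nb = (v[0] + d[0], v[1] + d[1], v[2] + d[2])
            if nb not in seen and all(0 <= nb[a] < grid_shape[a] for a in range(3)):
                seen.add(nb)
                frontier.append(nb)
    return recorded

# For every voxel returned here, the pair (s_k, v_ik) would also be appended to that
# voxel's own association set, so that both lookup directions used in Step 2 exist.
```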
Step 2: for each valid voxel v_i, the labeling score of the voxel is computed from the labeling function g(s_k) of the elements s_k in its association set, i.e. the score of v_i is the sum of g(s_k) over all ISO points recorded for it.
Here g(s_k) marks the usage of s_k: g(s_k) = 1 when s_k has not yet been confirmed as belonging to any scanning viewpoint, and g(s_k) = 0 once it has been confirmed as belonging to a scanning viewpoint.
Step 3: the voxel with the largest labeling score is selected for viewpoint calculation. The ISO points recorded by a voxel are not necessarily covered by the same scan range, as shown in Fig. 5(b), so histogram statistics are applied to count and select among the vectors d(v_i, s_k) of all the s_k recorded for the voxel. Using the conversion relation between the Cartesian coordinate system (x, y, z) and the spherical coordinate system, the vectors d(v_i, s_k) are converted to spherical coordinates; the X and Y axes of the histogram are θ and φ respectively, and the Z axis is the count of ISO points, as shown in Fig. 6. The size of the filter window in the XY plane is determined from the scanning view-angle constraint φ_max and the overlap constraint ξ_min of the three-dimensional sensor as φ_filter = φ_max(1 - ξ_min). The filter traverses all elements (x, y) of the histogram's XY plane, summing the number of ISO points inside the filter; when the statistic inside the filter window is maximal, the ISO points contained in the filter are {s'_k}, k ∈ N, where N is the number of points s'_k with labeling weight g(s'_k) = 1, and the mean of the vectors d(v_i, s'_k) is taken as the scanning viewpoint direction.
Thus the spatial position of the viewpoint (the center point of the selected voxel) and the viewpoint direction vector are obtained.
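The direction-vector selection of Step 3 can be sketched as below. This is an assumed, simplified rendering: the (θ, φ) window is slid over fixed bin edges, wrap-around of the azimuth is ignored, and the function name and bin count are not from the patent.

```python
import numpy as np

def viewpoint_direction(vectors, weights, phi_max, xi_min, bins=36):
    """Spherical-histogram direction selection: bin the directions d(v_i, s_k) in
    (theta, phi), slide a window of size phi_filter = phi_max * (1 - xi_min) over the
    plane, and average the vectors of weight-1 points in the densest window."""
    vectors = np.asarray(vectors, dtype=float)
    weights = np.asarray(weights, dtype=float)
    r = np.linalg.norm(vectors, axis=1)
    theta = np.arctan2(vectors[:, 1], vectors[:, 0])      # azimuth
    phi = np.arccos(vectors[:, 2] / r)                     # polar angle

    phi_filter = phi_max * (1.0 - xi_min)
    t_edges = np.linspace(-np.pi, np.pi, bins + 1)
    p_edges = np.linspace(0.0, np.pi, bins + 1)

    best_count, best_mask = 0, None
    for t0 in t_edges[:-1]:
        for p0 in p_edges[:-1]:
            mask = ((theta >= t0) & (theta < t0 + phi_filter) &
                    (phi >= p0) & (phi < p0 + phi_filter) & (weights > 0))
            if mask.sum() > best_count:
                best_count, best_mask = int(mask.sum()), mask
    if best_mask is None:
        return None
    mean_dir = vectors[best_mask].mean(axis=0)
    return mean_dir / np.linalg.norm(mean_dir)
```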
Step 4: set the labeling function g(s'_k) to 0 for all points in {s'_k}.
Steps 2 to 4 are repeated until the labeling scores of all voxels fall below the threshold. The NBVs algorithm flow chart is shown in Fig. 7. As the flow shows, each valid voxel records all ISO points satisfying the constraints, and the higher a voxel's labeling score, the more object surface area the viewpoint computed from that voxel can cover, i.e. the more important that viewpoint is. Because viewpoints are computed by always selecting the voxel with the largest labeling score, the final viewpoint list is naturally ordered from viewpoints covering many ISO points to viewpoints covering few.
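Putting Steps 2 to 4 together, the greedy selection loop might look like the following sketch. The data-structure names are assumptions, and for brevity the sketch marks every still-unassigned ISO point of the chosen voxel as used, whereas the patent marks only the points {s'_k} selected by the filter window.

```python
def plan_viewpoints(assoc_voxel, assoc_point, voxel_center, direction_fn, threshold):
    """Greedy NBVs selection loop over Steps 2-4.

    assoc_voxel : {voxel: [(k, d_vk), ...]}  ISO points (and vectors) recorded per valid voxel
    assoc_point : {k: [voxel, ...]}          voxels recorded per ISO point
    direction_fn: maps a list of vectors d(v_i, s_k) to a scan direction
    Returns (position, direction) pairs ordered by decreasing coverage."""
    g = {k: 1 for k in assoc_point}                    # labeling function g(s_k)
    viewpoints = []
    while True:
        # Step 2: labeling score of a voxel = sum of g over its recorded ISO points
        scores = {v: sum(g[k] for k, _ in pts) for v, pts in assoc_voxel.items()}
        if not scores:
            break
        best_voxel, best_score = max(scores.items(), key=lambda item: item[1])
        if best_score < threshold:                     # stop once every score is below threshold
            break
        # Step 3: position = voxel centre, direction from the histogram filter
        pts = [(k, d) for k, d in assoc_voxel[best_voxel] if g[k] == 1]
        viewpoints.append((voxel_center(best_voxel), direction_fn([d for _, d in pts])))
        # Step 4: mark covered points as used (the patent marks only the filtered {s'_k})
        for k, _ in pts:
            g[k] = 0
    return viewpoints
```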
Automated three-dimensional scanning and supplemental scanning
The NBVs algorithm above yields the spatial positions and directions of a series of viewpoints; visiting all viewpoints along the shortest path is then a path planning problem. The main algorithms for path planning include the ant colony algorithm, neural network algorithms, the particle swarm algorithm, the genetic algorithm and so on, each with advantages and disadvantages; in one embodiment, the shortest path through the viewpoint set is obtained with the ant colony algorithm. The color three-dimensional sensor then performs three-dimensional scanning along this shortest path; the high-precision depth data acquired at each viewing angle (in the left camera coordinate system) are transformed into the world coordinate system through the coordinate transformation relations, and real-time registration of the multi-view depth data is finally achieved so as to compute a high-precision fine three-dimensional model of the object.
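As a stand-in for the ant colony solver mentioned above, the following sketch orders the generated viewpoints with a plain nearest-neighbour heuristic; it only illustrates the path-ordering step and is not the patent's method.

```python
import numpy as np

def order_viewpoints_nearest_neighbour(positions, start=0):
    """Order viewpoints with a simple nearest-neighbour heuristic (illustration only)."""
    positions = np.asarray(positions, dtype=float)
    remaining = set(range(len(positions)))
    remaining.discard(start)
    order = [start]
    while remaining:
        last = positions[order[-1]]
        nxt = min(remaining, key=lambda i: np.linalg.norm(positions[i] - last))
        order.append(nxt)
        remaining.discard(nxt)
    return order
```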
In one embodiment, the three-dimensional sensor scans along the shortest path, and each viewpoint transition involves joint control of the rotation motor and the mechanical arm, which is essentially a transformation problem between the sensor coordinate systems. Let two adjacent scanning viewpoints be V_i and V_j; the transformation matrix between the two viewpoints represents the transformation of the infrared depth sensor coordinate system from viewpoint V_i to V_j in the world coordinate system. To move the three-dimensional sensor from viewpoint V_i to V_j, the projections of V_i and V_j onto the X_aY_a plane of the rotation-axis coordinate system (i.e. the world coordinate system) are computed, from which the rotation angle θ_ij and the corresponding transformation matrix of the rotation motor are obtained.
Let V_i' denote the viewpoint after the motor rotation. Because the transformation matrix from viewpoint V_i' to V_j, the matrix H_im and the transformation matrix of the manipulator at viewpoint V_i are known, a transformation relationship can be established between them, from which the transformation matrix of the mechanical arm at viewpoint V_j is obtained. By combining the rotation angle θ_ij of the rotation motor with the transformation matrix of the mechanical arm, the pose of the three-dimensional sensor can be adjusted between the different viewpoints. The high-precision depth data at each viewing angle (in the left camera coordinate system) are transformed into the world coordinate system through the transformation matrix of the infrared depth sensor coordinate system and the transformation matrix of the viewpoint, so that real-time registration of the multi-view depth data is achieved.
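The motor rotation angle θ_ij can be illustrated as below, assuming the rotation axis coincides with the Z axis of the rotation-axis (world) frame and that the viewpoint positions have already been expressed in that frame; the function names are assumptions.

```python
import numpy as np

def motor_rotation_angle(p_i, p_j):
    """Signed rotation angle theta_ij about the rotation axis, from the projections
    of viewpoints V_i and V_j onto the XaYa plane of the rotation-axis frame."""
    a = np.arctan2(p_i[1], p_i[0])                 # azimuth of V_i's projection
    b = np.arctan2(p_j[1], p_j[0])                 # azimuth of V_j's projection
    theta = b - a
    return (theta + np.pi) % (2.0 * np.pi) - np.pi  # wrap to [-pi, pi)

def rotation_about_z(theta):
    """Homogeneous transform of the motor rotation by theta about the Za axis."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0.0, 0.0],
                     [s,  c, 0.0, 0.0],
                     [0.0, 0.0, 1.0, 0.0],
                     [0.0, 0.0, 0.0, 1.0]])
```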
As equation (10) shows, the viewpoint planning algorithm above already accounts for self-occlusion of the object. In the actual scanning process, however, factors such as the surface material of the object inevitably cause some data loss or low-quality regions such as sparse point cloud data; more importantly, the rough three-dimensional model used for viewpoint planning loses the object's detail information, so the generated viewpoints do not account for fine scanning of geometrically detailed parts.
To this end, in one embodiment, the regions with missing raw data and missing detail are exposed by constructing a model confidence map, and the viewpoints for the supplemental scan are generated by combining it with the viewpoint planning algorithm. Poisson-disk sampling is performed on the raw point cloud acquired in the preceding high-precision scanning stage to generate ISO sampling points, and the confidence of ISO point s_k is computed according to equation (16):
f(s_k) = f_g(s_k, n_k) · f_s(s_k, n_k) (16)
where f_g(s_k, n_k) = Γ(s_k) · n_k is defined as the completeness confidence score, Γ(s_k) is the scalar-field gradient at point s_k, and n_k is the normal vector. f_g(s_k, n_k) is already obtained during the Poisson-disk sampling, so no additional computation is required. f_s(s_k, n_k) is the smoothness confidence score, computed over the original points q_j within the K-neighbourhood Ω_k of point s_k: the spatial weight function θ(||s_k - q_j||), where ||·|| is the l2-norm, decays sharply with increasing radius within Ω_k, and the orthogonal weight function Φ(n_k, q_j - s_k) reflects the distance from an original point q_j within Ω_k to the tangent plane at the ISO point. A high smoothness confidence score means that the surface around point s_k is locally smooth and the scan quality there is high; a low score means that the raw scan data around s_k are sparse, or contain a large proportion of high-frequency components, such as point cloud noise or rich geometric detail, so that more supplemental scanning is needed.
The confidence score effectively reflects the quality and fidelity of the scanned model's point cloud data, and the model confidence score is used to guide viewpoint planning in the supplemental scanning stage. A confidence threshold ε is set, the set S' = { s'_k | f(s'_k) < ε } of ISO points belonging to missing parts and geometrically detailed parts is determined, and viewpoint calculation is performed on S' with the algorithm above. Unlike the NBVs algorithm described earlier, g(s'_k) is now assigned according to the confidence score of s'_k.
The voxel score is therefore no longer the number of ISO points but the sum of the ISO points' confidence scores, and performing the viewpoint calculation on the voxel with the highest score makes the viewpoints concentrate on scanning the missing parts and the geometrically detailed parts.
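A small sketch of the supplemental-scan point selection is given below. The exact assignment of g(s'_k) in the patent is not reproduced in this text, so the sketch simply keeps the points below the confidence threshold and, following the description above, lets their confidence scores accumulate into the voxel score; the function name is an assumption.

```python
def supplemental_scan_weights(iso_points, confidence, eps):
    """Keep ISO points whose confidence f(s'_k) falls below the threshold eps.
    In the supplemental pass the voxel score is accumulated from these confidence
    values rather than from point counts (the exact form of g(s'_k) is not
    reproduced in this text)."""
    return {k: f for k, f in zip(iso_points, confidence) if f < eps}
```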
The foregoing is a further detailed description of the invention in connection with specific/preferred embodiments, and it is not intended that the invention be limited to such description. It will be apparent to those skilled in the art that several alternatives or modifications can be made to the described embodiments without departing from the spirit of the invention, and these alternatives or modifications should be considered to be within the scope of the invention.

Claims (6)

1. A viewpoint planning method based on a three-dimensional model, characterized in that it comprises the following steps:
S1, using an initial sampling point s_k of the three-dimensional model to find a voxel v_i, and searching with the voxel v_i as a seed to obtain a valid voxel set {v_i};
S2, using a labeling function g(s_k) to solve the labeling score of each voxel in the voxel set {v_i}; the labeling function g(s_k) marks the usage of the sampling point s_k: it is 1 when s_k has not yet been confirmed as belonging to any scanning viewpoint, and 0 once s_k has been confirmed as belonging to a scanning viewpoint;
S3, selecting the voxel with the largest labeling score for viewpoint calculation;
S4, setting the labeling function to 0, and repeating steps S2 to S4 until the labeling scores of all the voxels are lower than the threshold;
the viewpoint calculation includes calculating the spatial position of the viewpoint;
the viewpoint calculation further includes calculating the direction vector of the viewpoint, and the calculation of the direction vector comprises the following steps:
using histogram statistics to count and select the vectors d(v_i, s_k) of all sampling points s_k;
converting the vectors d(v_i, s_k) from the Cartesian coordinate system (x, y, z) to the spherical coordinate system;
determining the size of the filter window in the XY plane as φ_filter = φ_max(1 - ξ_min), traversing all elements (x, y) of the histogram's XY plane with the filter and summing the number of sampling points inside the filter; when the statistic inside the filter window is maximal, the sampling points contained in the filter are {s'_k}, k ∈ N, where N is the number of points s'_k with labeling weight g(s'_k) = 1, and the mean of the vectors d(v_i, s'_k) is taken as the scanning viewpoint direction;
where φ_max denotes the maximum field angle of the three-dimensional sensor and ξ_min is the minimum field-of-view overlap.
2. The viewpoint planning method based on a three-dimensional model according to claim 1, characterized in that step S1 further comprises:
for the initial sampling point s_k, the voxel v_i at the position along its normal n_k at distance d_0 = (d_n + d_f)/2 is found according to the following formula:
n_x = floor((p_x - p_x_min)/ΔD),  n_y = floor((p_y - p_y_min)/ΔD),  n_z = floor((p_z - p_z_min)/ΔD)
where ΔD is the distance interval at which the three-dimensional model's space is divided into a 3D voxel grid, (p_x, p_y, p_z) are the coordinates of a three-dimensional space point, (p_x_min, p_y_min, p_z_min) is the minimum coordinate of the space bounding box S, and v_i = (n_x, n_y, n_z) is the voxel index.
3. The viewpoint planning method based on a three-dimensional model according to claim 1, characterized in that the search in step S1 is an expansion search over neighbouring voxels using a greedy algorithm, evaluated according to a visibility constraint.
4. The viewpoint planning method based on a three-dimensional model according to claim 3, characterized in that the visibility constraint represents the angular range within which the measurement target point may be acquired; letting the normal vector of the measurement target point p_k be n_k, the visibility constraint is:
acos( n_k · (-v_ik) / (|n_k| |v_ik|) ) ≤ α_max
where α_max denotes the maximum visible angle range of the measurement target point, and v_ik = d(v_i, s_k) denotes the vector from the viewpoint position v_i to the measurement target point position s_k.
5. The viewpoint planning method based on a three-dimensional model according to claim 1, characterized in that the filter window is determined according to a measurement space constraint and an overlap constraint.
6. The viewpoint planning method based on a three-dimensional model according to claim 5, characterized in that the measurement space constraint includes a field of view (FOV) constraint and a depth of field (DOF) constraint; the overlap constraint means that a certain field-of-view overlap is required between adjacent scanning fields of view, the field-of-view overlap being defined as ξ = W_cover / W, where W and W_cover respectively denote the total field-of-view area and the area of the overlapping portion, with the constraint ξ ≥ ξ_min.
CN201911229047.4A, priority date 2019-04-15, filed 2019-12-04: A viewpoint planning method based on three-dimensional model, Active, CN111060006B (en)

Applications Claiming Priority (2)

Application Number / Priority Date / Filing Date / Title
CN2019103013434, priority 2019-04-15
CN201910301343, priority 2019-04-15

Publications (2)

Publication Number / Publication Date
CN111060006A (en), 2020-04-24
CN111060006B, 2024-12-13

Family

ID=70299747

Family Applications (1)

Application Number / Title / Priority Date / Filing Date
CN201911229047.4A, Active, CN111060006B (en): A viewpoint planning method based on three-dimensional model

Country Status (1)

Country / Link
CN: CN111060006B (en)

Families Citing this family (7)

* Cited by examiner, † Cited by third party
Publication number / Priority date / Publication date / Assignee / Title
CN111351473B (en)*, 2020-04-27, 2022-03-04, 华中科技大学无锡研究院: A robot-based viewpoint planning method, device and measurement system
CN111823231B (en)*, 2020-06-19, 2021-07-02, 浙江大学: A method using a robotic arm for non-repeatable coverage tasks with minimal lifts
CN112258445B (en)*, 2020-08-21, 2022-08-02, 西北工业大学: A Viewpoint Solution Method for False and Missing Installation of Aero-engine
CN112577447B (en)*, 2020-12-07, 2022-03-22, 新拓三维技术(深圳)有限公司: Three-dimensional full-automatic scanning system and method
CN113155054B (en)*, 2021-04-15, 2023-04-11, 西安交通大学: Automatic three-dimensional scanning planning method for surface structured light
CN113297691B (en)*, 2021-04-30, 2022-04-08, 成都飞机工业(集团)有限责任公司: Minimum bounding box size solving method based on plane traversal
CN119722949B (en)*, 2024-12-16, 2025-10-03, 广州赛特智能科技有限公司: Viewpoint optimization method, device, equipment and medium for three-dimensional reconstruction

Citations (1)

* Cited by examiner, † Cited by third party
Publication number / Priority date / Publication date / Assignee / Title
CN104063894A (en)*, 2014-06-13, 2014-09-24, 中国科学院深圳先进技术研究院: Point cloud three-dimensional model reestablishing method and system

Family Cites Families (8)

* Cited by examiner, † Cited by third party
Publication number / Priority date / Publication date / Assignee / Title
DE112006003380T5 (en)*, 2005-12-16, 2008-10-16, Ihi Corporation: Method and apparatus for the positional matching of three-dimensional shape data
WO2012071688A1 (en)*, 2010-12-03, 2012-06-07, 中国科学院自动化研究所: Method for analyzing 3d model shape based on perceptual information
CN104299261B (en)*, 2014-09-10, 2017-01-25, 深圳大学: Three-dimensional imaging method and system for human body
DE102015201271A1 (en)*, 2014-09-17, 2016-03-17, Friedrich-Alexander-Universität Erlangen - Nürnberg: Method and system for determining the local quality of surface data extracted from volume data
CN110338844B (en)*, 2015-02-16, 2022-04-19, 深圳迈瑞生物医疗电子股份有限公司: Three-dimensional imaging data display processing method and three-dimensional ultrasonic imaging method and system
CN104881873B (en)*, 2015-06-03, 2018-09-07, 浙江工业大学: A kind of multistage adjustment sparse imaging method of mixed weighting for complicated fibre bundle Accurate Reconstruction
CN108765548A (en)*, 2018-04-25, 2018-11-06, 安徽大学: Three-dimensional scene real-time reconstruction method based on depth camera
CN108986048B (en)*, 2018-07-18, 2020-04-28, 大连理工大学: Three-dimensional point cloud rapid composite filtering processing method based on line laser scanning

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number / Priority date / Publication date / Assignee / Title
CN104063894A (en)*, 2014-06-13, 2014-09-24, 中国科学院深圳先进技术研究院: Point cloud three-dimensional model reestablishing method and system

Also Published As

Publication number / Publication date
CN111060006A (en), 2020-04-24

Similar Documents

Publication / Title
CN110246186B (en): Automatic three-dimensional color imaging and measuring method
CN110243307B (en): An automated three-dimensional color imaging and measurement system
CN111060006B (en): A viewpoint planning method based on three-dimensional model
CN110230979B (en): A three-dimensional target and a three-dimensional color digital system calibration method thereof
US8213707B2 (en): System and method for 3D measurement and surface reconstruction
KR102447461B1 (en): Estimation of dimensions for confined spaces using a multidirectional camera
Kriegel et al.: Efficient next-best-scan planning for autonomous 3D surface reconstruction of unknown objects
TWI555379B (en): An image calibrating, composing and depth rebuilding method of a panoramic fish-eye camera and a system thereof
JP6426968B2 (en): Information processing apparatus and method thereof
Herráez et al.: 3D modeling by means of videogrammetry and laser scanners for reverse engineering
CN113205603B (en): A 3D point cloud stitching and reconstruction method based on a rotating stage
WO2021140886A1 (en): Three-dimensional model generation method, information processing device, and program
US20170287166A1 (en): Camera calibration method using a calibration target
CN112396664A (en): Monocular camera and three-dimensional laser radar combined calibration and online optimization method
CN108053476B (en): A system and method for measuring human parameters based on segmented three-dimensional reconstruction
CN113345084B (en): Three-dimensional modeling system and three-dimensional modeling method
CN113884519B (en): Self-navigating X-ray imaging system and imaging method
JP4395689B2 (en): Image data processing method and modeling apparatus
JP2013178656A (en): Image processing device, image processing method, and image processing program
CN111127613A (en): 3D reconstruction method and system of image sequence based on scanning electron microscope
CN111583388A (en): Scanning method and device of three-dimensional scanning system
Owens et al.: MSG-cal: Multi-sensor graph-based calibration
CN116563377A (en): A Martian Rock Measurement Method Based on Hemispherical Projection Model
CN111833392B (en): Marking point multi-angle scanning method, system and device
CN116242277A (en): Automatic measurement method for size of power supply cabinet structural member based on full-field three-dimensional vision

Legal Events

Code / Title
PB01: Publication
SE01: Entry into force of request for substantive examination
GR01: Patent grant
