CN117824532A - Surface structured light three-dimensional measurement system and method - Google Patents

Surface structured light three-dimensional measurement system and method
Download PDF

Info

Publication number
CN117824532A
CN117824532A
Authority
CN
China
Prior art keywords
calibration
pixel
image
grating image
phase
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202410009447.9A
Other languages
Chinese (zh)
Inventor
何文韬
王辰
王晓南
孙繁
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Wuhan Zhongguan Automation Technology Co ltd
Original Assignee
Wuhan Zhongguan Automation Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Wuhan Zhongguan Automation Technology Co ltd
Priority to CN202410009447.9A
Publication of CN117824532A
Legal status: Pending (current)

Abstract

The application provides a surface structured light three-dimensional measurement system and method, relating to the field of computer technology. The projector projects a grating image meeting preset requirements onto an object to be measured according to a measurement instruction sent by the control unit. The first image acquisition device and the second image acquisition device respectively capture a first projection grating image and a second projection grating image according to shooting instructions sent by the control unit, and send them to the control unit. The control unit determines, based on a preset calibration function, a first three-dimensional coordinate corresponding to each first pixel in the first projection grating image and a second three-dimensional coordinate corresponding to each second pixel in the second projection grating image, and obtains the three-dimensional characteristic parameters of the object to be measured from the first and second three-dimensional coordinates. When image overexposure occurs in the first or second projection grating image, the measurement data of the overexposed area can thus be completed from the preset calibration function, yielding more accurate three-dimensional characteristic parameters.

Description

Surface structured light three-dimensional measurement system and method
Technical Field
The application relates to the technical field of computers, in particular to a surface structured light three-dimensional measurement system and method.
Background
Surface structured light three-dimensional measurement is an advanced non-contact optical measurement technique: a specific light pattern is projected onto the surface of the object to be measured, a camera captures the pattern as modulated by the object surface, and the three-dimensional morphological information of the object is obtained by analyzing the deformation of the pattern.
In the prior art, when the three-dimensional morphological information of an object is acquired by surface structured light three-dimensional measurement, the pattern projected onto the surface of the object to be measured by the projector must be clearly captured by the left and right cameras simultaneously.
It can be seen that the existing measurement method places high demands on the environment. When surface structured light three-dimensional measurement is used to obtain the three-dimensional morphology of a highly reflective object, strong light sources in the ambient light together with the specular reflection characteristic of the object easily produce overexposed areas in the camera images, and the measurement data of those overexposed areas are then lost.
Disclosure of Invention
The invention aims to overcome the defects in the prior art and provide a surface structured light three-dimensional measurement system and a surface structured light three-dimensional measurement method, which can complement measurement data of an overexposed area so as to obtain more accurate three-dimensional characteristic parameters.
In order to achieve the above purpose, the technical solution adopted in the embodiment of the present application is as follows:
in a first aspect, the present invention provides a surface structured light three-dimensional measurement system comprising: the system comprises a projector, a first image acquisition device, a second image acquisition device and a control unit, wherein the projector, the first image acquisition device and the second image acquisition device are respectively and electrically connected with the control unit, and the projector is positioned between the first image acquisition device and the second image acquisition device and is consistent with the depth direction in orientation;
the projector is used for projecting a grating image meeting preset requirements to an object to be measured according to the measurement instruction sent by the control unit;
the first image acquisition device and the second image acquisition device are used for respectively shooting a first projection grating image and a second projection grating image of the object to be detected according to shooting instructions sent by the control unit, and sending the first projection grating image and the second projection grating image to the control unit;
the control unit is used for determining a first three-dimensional coordinate corresponding to each first pixel in the first projection grating image and a second three-dimensional coordinate corresponding to each second pixel in the second projection grating image based on a preset calibration function; according to the first three-dimensional coordinate and the second three-dimensional coordinate, three-dimensional characteristic parameters of an object to be measured are obtained, wherein the preset calibration function comprises: the first calibration function is used for representing the functional relation between the depth value and the phase value corresponding to each first pixel in the first image acquisition equipment, and the second calibration function is used for representing the functional relation between the depth value and the phase value corresponding to each second pixel in the second image acquisition equipment.
In an optional embodiment, the projector is further configured to project a calibration raster image meeting a preset requirement to a preset plane according to a calibration instruction sent by the control unit;
the control unit is also used for receiving a first calibration projection grating image and a second calibration projection grating image which are sent by the first image acquisition equipment and the second image acquisition equipment;
calculating a calibration depth value and a calibration phase value corresponding to each first calibration pixel in the first calibration projection grating image, and fitting to construct the first calibration function;
and calculating a calibration depth value and a calibration phase value corresponding to each second calibration pixel in the second calibration projection grating image, and fitting to construct the second calibration function.
In an optional implementation manner, the calculating the calibration depth value and the calibration phase value corresponding to each first calibration pixel in the first calibration projection grating image, and fitting to construct the first calibration function includes:
determining a first calibration phase map according to the gray value of each first calibration pixel in the first calibration projection grating image;
determining a second calibration phase map according to the gray value of each second calibration pixel in the second calibration projection grating image;
performing stereo correction on the first calibration phase map and the second calibration phase map so that the ordinates of corresponding calibration pixels of the first calibration projection grating image and the second calibration projection grating image are the same;
searching a second calibration pixel matched with each first calibration pixel in the first calibration projection grating image in the second calibration projection grating image to obtain a matched calibration pixel pair;
and fitting and constructing the first calibration function according to each first calibration pixel in the matched calibration pixel pair.
In an optional implementation manner, the searching the second calibration pixels in the second calibration projection grating image, which are matched with each first calibration pixel in the first calibration projection grating image, to obtain a matched calibration pixel pair includes:
determining whether a phase value corresponding to the first calibration pixel is valid based on a preset constraint function, wherein the preset constraint function comprises: modulating a gray scale constraint function and an average gray scale deviation constraint function;
if the first calibration pixels are effective, second calibration pixels matched with the first calibration pixels in the first calibration projection grating image are searched in the second calibration projection grating image, and matched calibration pixel pairs are obtained.
In an optional implementation manner, the control unit is specifically configured to determine, based on a first calibration function, a first depth value corresponding to each first pixel according to a gray value of each first pixel in the first projection raster image;
determining a second depth value corresponding to each second pixel according to the gray value of each second pixel in the second projection grating image based on the second calibration function;
and respectively calculating a first three-dimensional coordinate corresponding to the first pixel and a second three-dimensional coordinate corresponding to the second pixel according to the pinhole imaging principle, based on the first depth value and the second depth value.
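The pinhole back-projection step above can be sketched as follows; the intrinsic parameters (`fx`, `fy`, `cx`, `cy`) and the example pixel are illustrative values, not figures from the patent.

```python
import numpy as np

def backproject(u, v, z, fx, fy, cx, cy):
    """Recover a 3-D camera-frame point from pixel (u, v) and depth z using
    the pinhole model: X = (u - cx) * z / fx, Y = (v - cy) * z / fy."""
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.array([x, y, z])

# Example: principal point (320, 240), focal length 800 px, depth 2.0 m.
p = backproject(400.0, 240.0, 2.0, fx=800.0, fy=800.0, cx=320.0, cy=240.0)
# X = (400 - 320) * 2 / 800 = 0.2, Y = 0.0, Z = 2.0
```

The same routine serves both cameras; only the intrinsic parameters differ.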
In an optional embodiment, the control unit is specifically configured to determine a first phase map according to a gray value of each first pixel in the first projection raster image;
determining a second phase map according to the gray value of each second pixel in the second projection raster image;
performing stereo correction on the first phase map and the second phase map so that the ordinates of corresponding pixels of the first projection grating image and the second projection grating image are the same;
searching whether second pixels matched with first pixels in the first projection grating image exist in the second projection grating image or not;
if such a matching second pixel exists, calculating, based on a triangulation algorithm, the first depth value corresponding to each first pixel in the matching point pair;
if no matching second pixel exists, and the phase value corresponding to the first pixel is determined to be valid, determining the first depth value corresponding to the first pixel based on the first calibration function.
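For the matched case, triangulation on rectified images reduces to the familiar disparity relation z = f·b/d; the focal length and baseline below are illustrative assumptions, not system parameters from the patent.

```python
def depth_from_disparity(u_left, u_right, focal_px, baseline_m):
    """After stereo rectification, matched pixels share a row, so depth
    follows from triangulation: z = f * b / d, with disparity d = u_l - u_r."""
    d = u_left - u_right
    if d <= 0:
        raise ValueError("non-positive disparity: match is invalid")
    return focal_px * baseline_m / d

# Example: 40 px disparity, 800 px focal length, 0.1 m baseline.
z = depth_from_disparity(350.0, 310.0, focal_px=800.0, baseline_m=0.1)
# z = 800 * 0.1 / 40 = 2.0 m
```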
In an alternative embodiment, the determining the first phase map according to the gray value of each first pixel in the first projection raster image includes:
determining a first phase principal value function according to the gray value of each first pixel in the first projection grating image;
based on the multi-frequency heterodyne principle, converting the first phase main value function to obtain a first absolute phase function;
the first phase map is determined from the first absolute phase function.
In an optional embodiment, the projector is specifically configured to project three groups of grating images to an object to be measured according to a measurement instruction sent by the control unit, where each group of grating images has different frequencies, and each group of grating images includes a first sub-grating image, a second sub-grating image, a third sub-grating image, and a fourth sub-grating image that have different phase shifts;
the control unit is further used for calculating the modulation gray scale corresponding to each first pixel according to the gray value of each sub-grating image at the first pixel;
and if the modulation gray scale corresponding to the first pixel is smaller than a preset modulation gray scale threshold value, determining that the phase value corresponding to the first pixel is invalid.
In an optional embodiment, the control unit is further configured to calculate, according to the gray value of each sub-grating image on the first pixel, an average gray value deviation corresponding to the first pixel; and if the average gray value deviation corresponding to the first pixel is larger than the preset average gray value deviation threshold, determining that the phase value corresponding to the first pixel is invalid.
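The two constraints can be sketched for a four-step sequence as follows; the threshold values and the mid-gray reference are illustrative assumptions, not figures from the patent.

```python
import numpy as np

def pixel_valid(I0, I1, I2, I3, mod_thresh=10.0, avg_dev_thresh=100.0):
    """Validity test for one pixel of a four-step phase-shift sequence.
    The modulation B = 0.5*sqrt((I3-I1)^2 + (I0-I2)^2) collapses toward zero
    in over- or under-exposed regions; the thresholds and the 128 mid-gray
    reference below are assumptions for illustration."""
    B = 0.5 * np.hypot(I3 - I1, I0 - I2)    # modulation gray scale
    A = (I0 + I1 + I2 + I3) / 4.0           # average gray value
    if B < mod_thresh:                       # modulation constraint
        return False
    if abs(A - 128.0) > avg_dev_thresh:      # average-gray-deviation constraint
        return False
    return True

# A well-exposed pixel passes; a saturated (flat) pixel fails both tests.
assert pixel_valid(228, 128, 28, 128)
assert not pixel_valid(255, 255, 255, 255)
```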
In a second aspect, the present invention provides a surface structured light three-dimensional measurement method, which is applied to a control unit in a surface structured light three-dimensional measurement system, the surface structured light three-dimensional measurement system including: the system comprises a projector, a first image acquisition device, a second image acquisition device and a control unit, wherein the projector, the first image acquisition device and the second image acquisition device are respectively and electrically connected with the control unit, the projector is positioned between the first image acquisition device and the second image acquisition device, and the orientation is consistent with the depth direction, and the method comprises the following steps:
receiving a first projection grating image and a second projection grating image sent by a first image acquisition device and a second image acquisition device;
determining a first three-dimensional coordinate corresponding to each first pixel in the first projection grating image and a second three-dimensional coordinate corresponding to each second pixel in the second projection grating image based on a preset calibration function;
according to the first three-dimensional coordinate and the second three-dimensional coordinate, three-dimensional characteristic parameters of an object to be measured are obtained, wherein the preset calibration function comprises: the first calibration function is used for representing the functional relation between the depth value and the phase value corresponding to each first pixel in the first image acquisition equipment, and the second calibration function is used for representing the functional relation between the depth value and the phase value corresponding to each second pixel in the second image acquisition equipment.
The beneficial effects of this application are:
in the surface structured light three-dimensional measurement system and method provided by the embodiment of the application, the method comprises the following steps: the system comprises a projector, a first image acquisition device, a second image acquisition device and a control unit, wherein the projector, the first image acquisition device and the second image acquisition device are respectively and electrically connected with the control unit, and the projector is positioned between the first image acquisition device and the second image acquisition device and is consistent with the depth direction in orientation; the projector is used for projecting a grating image meeting preset requirements to an object to be measured according to the measurement instruction sent by the control unit; the first image acquisition device and the second image acquisition device are used for respectively shooting a first projection grating image and a second projection grating image of an object to be measured according to shooting instructions sent by the control unit, and sending the first projection grating image and the second projection grating image to the control unit; the control unit is used for determining a first three-dimensional coordinate corresponding to each first pixel in the first projection grating image and a second three-dimensional coordinate corresponding to each second pixel in the second projection grating image based on a preset calibration function; according to the first three-dimensional coordinate and the second three-dimensional coordinate, three-dimensional characteristic parameters of the object to be measured are obtained, wherein the preset calibration function comprises: the first calibration function is used for representing the functional relation between the depth value and the phase value corresponding to each first pixel in the first image acquisition device, and the second calibration function is used for representing the functional relation between the depth value and the phase value corresponding to each second pixel in the second image acquisition device.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings that are needed in the embodiments will be briefly described below, it being understood that the following drawings only illustrate some embodiments of the present application and therefore should not be considered limiting the scope, and that other related drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
FIG. 1 is a schematic diagram of a surface structured light three-dimensional measurement system provided in the present application;
FIG. 2 is a schematic diagram of a three-dimensional measurement flow of surface structured light provided in the present application;
FIG. 3 is a schematic diagram of another three-dimensional measurement procedure of surface structured light provided in the present application;
FIG. 4 is a schematic diagram of another three-dimensional measurement procedure of surface structured light provided in the present application;
FIG. 5 is a schematic diagram of a triangulation algorithm according to an embodiment of the present application;
FIG. 6 is a schematic diagram of a phase principal value function provided herein;
FIG. 7 is a schematic diagram of another three-dimensional measurement procedure of surface structured light provided in the present application;
fig. 8 is a waveform schematic diagram of a first calibration absolute phase function according to an embodiment of the present application.
Detailed Description
For the purposes of making the objects, technical solutions and advantages of the embodiments of the present application more clear, the technical solutions of the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is apparent that the described embodiments are some embodiments of the present application, but not all embodiments. The components of the embodiments of the present application, which are generally described and illustrated in the figures herein, may be arranged and designed in a wide variety of different configurations.
Thus, the following detailed description of the embodiments of the present application, as provided in the accompanying drawings, is not intended to limit the scope of the application, as claimed, but is merely representative of selected embodiments of the application. All other embodiments, which can be made by one of ordinary skill in the art based on the embodiments herein without making any inventive effort, are intended to be within the scope of the present application.
It should be noted that: like reference numerals and letters denote like items in the following figures, and thus once an item is defined in one figure, no further definition or explanation thereof is necessary in the following figures.
Fig. 1 is a schematic diagram of a three-dimensional measurement system for surface structured light provided in the present application. As shown in fig. 1, the surface structured light three-dimensional measurement system may include: the projector 110, the first image acquisition device 120, the second image acquisition device 130 and the control unit 140, wherein the projector 110, the first image acquisition device 120 and the second image acquisition device 130 are respectively and electrically connected with the control unit 140, and the projector 110 is positioned between the first image acquisition device 120 and the second image acquisition device 130 and is oriented in accordance with the depth direction. Here the depth direction of the projector may characterize the projection capability of the projector in the vertical direction, i.e. the depth of the image that the projector is able to project.
Alternatively, the projector may be a Digital Light Processing (DLP) projector, the first image capturing device may be a left camera, hereinafter referred to as a first camera, and the second image capturing device may be a right camera, hereinafter referred to as a second camera, which are horizontally disposed with their optical axes at an angle, and the DLP projector is located at the center of the left and right cameras and oriented in line with the depth direction.
In some embodiments, calibration of the internal and external parameters of the first and second image acquisition devices may be done in advance prior to measurement. Wherein the internal parameters may include: the focal length of the camera, principal point (intersection coordinates of the optical axis and the image plane), lens distortion parameters, external parameters may include: a rotation matrix and a translation vector converted between the camera coordinate system and the world coordinate system.
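How the calibrated internal and external parameters act together can be sketched with a forward projection; the intrinsic matrix `K`, rotation `R`, and translation `t` below are illustrative values, not calibration results from the patent (lens distortion is omitted for brevity).

```python
import numpy as np

# Illustrative intrinsic parameters: focal lengths fx = fy = 800 px,
# principal point (cx, cy) = (320, 240).
K = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])
# Illustrative extrinsic parameters: camera axes aligned with the world
# frame, camera origin shifted 0.5 m along the optical axis.
R = np.eye(3)
t = np.array([0.0, 0.0, 0.5])

def project(Pw):
    """World point -> pixel: apply extrinsics (R, t), then intrinsics K,
    then the perspective divide of the pinhole model."""
    Pc = R @ Pw + t                  # world frame -> camera frame
    uvw = K @ Pc
    return uvw[:2] / uvw[2]

uv = project(np.array([0.0, 0.0, 1.5]))
# Pc = (0, 0, 2) projects to the principal point (320, 240)
```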
The projector is used for projecting a grating image meeting preset requirements to an object to be measured according to the measurement instruction sent by the control unit; the first image acquisition device and the second image acquisition device are used for respectively shooting a first projection grating image and a second projection grating image of the object to be measured according to shooting instructions sent by the control unit, and sending the first projection grating image and the second projection grating image to the control unit.
The object to be measured may be any object that needs to acquire three-dimensional characteristic parameters, alternatively, may be a metal object with high reflection characteristics, for example, may be a high reflection metal part, which is not limited herein.
Alternatively, the measurement instructions may be sent by the control unit to the projector, or alternatively, may be sent by other external devices, not limited herein. In some embodiments, the measurement instructions may carry projection parameters, for example, including, but not limited to, the number, frequency, phase, etc. of the raster images.
The control unit is used for determining a first three-dimensional coordinate corresponding to each first pixel in the first projection grating image and a second three-dimensional coordinate corresponding to each second pixel in the second projection grating image based on a preset calibration function; and acquiring three-dimensional characteristic parameters of the object to be measured according to the first three-dimensional coordinates and the second three-dimensional coordinates.
The preset calibration function comprises the following steps: the first calibration function is used for representing the functional relation between the depth value and the phase value corresponding to each first pixel in the first image acquisition equipment, and the second calibration function is used for representing the functional relation between the depth value and the phase value corresponding to each second pixel in the second image acquisition equipment.
For the first projection grating image, if the first projection grating image has the phenomenon of partial image overexposure, the control unit can substitute the phase value of a first pixel of an overexposed region in the first projection grating image into the first calibration function, and solve the first depth value corresponding to the first pixel; according to the first depth value, further obtaining a first three-dimensional coordinate corresponding to the first pixel; for the second projection grating image, if the second projection grating image has the phenomenon of partial image overexposure, the control unit can substitute the phase value of the second pixel of the overexposure region in the second projection grating image into the second calibration function, and solve the second depth value corresponding to the second pixel; and further obtaining a second three-dimensional coordinate corresponding to the second pixel according to the second depth value.
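The overexposure fallback described above amounts to evaluating the fitted calibration function at the pixel's phase value; the cubic-polynomial form and the coefficients below are assumptions for illustration, since the patent does not fix the functional form here.

```python
import numpy as np

# Assumed per-pixel calibration function: a cubic polynomial mapping absolute
# phase (rad) to depth. These coefficients encode z = 2*phi + 50 for clarity.
coeffs = np.array([0.0, 0.0, 2.0, 50.0])

def depth_from_phase(phi, coeffs):
    """Fallback for a pixel in an overexposed region: substitute its phase
    value into the fitted phase-to-depth calibration function to complete
    the depth measurement that stereo matching could not provide."""
    return np.polyval(coeffs, phi)

z = depth_from_phase(3.0, coeffs)   # 2 * 3 + 50 = 56.0
```

The recovered depth is then back-projected through the pinhole model to give the pixel's three-dimensional coordinate, as in the first aspect.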
And respectively obtaining a first three-dimensional coordinate corresponding to each first pixel and a second three-dimensional coordinate corresponding to each second pixel, namely obtaining point cloud data about the object to be measured, and then obtaining three-dimensional characteristic parameters of the object to be measured according to the point cloud data.
Alternatively, the three-dimensional characteristic parameter may include a three-dimensional shape, a three-dimensional size, a three-dimensional texture, and the like, which are not limited herein.
In summary, an embodiment of the present application provides a surface structured light three-dimensional measurement system, including: the system comprises a projector, a first image acquisition device, a second image acquisition device and a control unit, wherein the projector, the first image acquisition device and the second image acquisition device are respectively and electrically connected with the control unit, and the projector is positioned between the first image acquisition device and the second image acquisition device and is consistent with the depth direction in orientation; the projector is used for projecting a grating image meeting preset requirements to an object to be measured according to the measurement instruction sent by the control unit; the first image acquisition device and the second image acquisition device are used for respectively shooting a first projection grating image and a second projection grating image of an object to be measured according to shooting instructions sent by the control unit, and sending the first projection grating image and the second projection grating image to the control unit; the control unit is used for determining a first three-dimensional coordinate corresponding to each first pixel in the first projection grating image and a second three-dimensional coordinate corresponding to each second pixel in the second projection grating image based on a preset calibration function; according to the first three-dimensional coordinate and the second three-dimensional coordinate, three-dimensional characteristic parameters of the object to be measured are obtained, wherein the preset calibration function comprises: the first calibration function is used for representing the functional relation between the depth value and the phase value corresponding to each first pixel in the first image acquisition device, and the second calibration function is used for representing the functional relation between the depth value and the phase value corresponding to each second pixel in the second image acquisition device.
In an alternative embodiment, the projector is further configured to project a calibration raster image meeting a preset requirement to a preset plane according to a calibration instruction sent by the control unit.
The control unit is also used for receiving the first calibration projection grating image and the second calibration projection grating image which are sent by the first image acquisition equipment and the second image acquisition equipment; calculating a calibration depth value and a calibration phase value corresponding to each first calibration pixel in the first calibration projection grating image, and fitting to construct a first calibration function; and calculating a calibration depth value and a calibration phase value corresponding to each second calibration pixel in the second calibration projection grating image, and fitting to construct a second calibration function.
Alternatively, the calibration instructions may carry projection parameters, for example, including, but not limited to, the number, frequency, phase, etc. of the grating images. Optionally, after the control unit sends the calibration instruction to the projector, the control unit may also send the calibration shooting instruction to the first image capturing device and the second image capturing device synchronously to instruct shooting.
The normal direction of the preset plane may be consistent with the depth direction. The DLP projector can first project a calibration grating image onto the preset plane according to the calibration instruction sent by the control unit, and synchronously trigger the first image acquisition device and the second image acquisition device to capture images, so as to obtain the first calibration projection grating image and the second calibration projection grating image respectively.
After the control unit acquires the first calibration projection grating image and the second calibration projection grating image, it can calculate the corresponding calibration depth value and calibration phase value from the gray value of each first calibration pixel in the first calibration projection grating image, and fit the first calibration function to those values; likewise, it calculates the corresponding calibration depth value and calibration phase value from the gray value of each second calibration pixel in the second calibration projection grating image, and fits the second calibration function to those values.
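One plausible realization of the fit, assuming the calibration plane is imaged at several known depths and that depth is modeled as a polynomial in absolute phase (both are assumptions; the patent does not fix the functional form or the number of planes):

```python
import numpy as np

# For one pixel: the plane is placed at several known depths (mm) and the
# absolute phase is measured at each. The linear synthetic law below stands
# in for real measurements purely so the fit can be checked.
depths = np.array([400.0, 450.0, 500.0, 550.0, 600.0])
phases = 0.02 * depths + 1.0

# Fit depth = f(phase) as a cubic polynomial (assumed degree) for this pixel;
# a full calibration repeats this fit independently for every pixel.
coeffs = np.polyfit(phases, depths, deg=3)

# Query the fitted function at the phase a 525 mm plane would produce.
z_est = np.polyval(coeffs, 0.02 * 525.0 + 1.0)
```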
Fig. 2 is a schematic diagram of a three-dimensional measurement flow of surface structured light provided in the present application. In an alternative embodiment, as shown in fig. 2, the calculating the calibration depth value and the calibration phase value corresponding to each first calibration pixel in the first calibration projection grating image, and fitting to construct a first calibration function includes:
S101, determining a first calibration phase map according to the gray values of the first calibration pixels in the first calibration projection grating image.
S102, determining a second calibration phase map according to the gray values of the second calibration pixels in the second calibration projection grating image.
Optionally, taking the first calibration phase map as an example: after the gray value of each first calibration pixel is obtained, it may be used to calculate a first calibration phase principal value, which is converted to obtain a first calibration absolute phase value; the first calibration absolute phase values are stored in a matrix to obtain the first calibration phase map.
The second calibration phase map may be determined in the same manner as the first calibration phase map, which is not repeated here.
S103, performing stereo correction on the first calibration phase diagram and the second calibration phase diagram, so that the ordinates of corresponding calibration pixels of the first calibration projection grating image and the second calibration projection grating image in the horizontal direction are the same.
The stereo correction operation speeds up matching and thus improves calibration efficiency. During stereo correction, according to the epipolar geometry principle, for a first calibration pixel p1 in the first calibration projection grating image, the corresponding second calibration pixel p2 on the second calibration projection grating image lies on an epipolar line l2 of that pixel; the epipolar line equation can be determined from the measurement system parameters and the pixel p1. The purpose of the stereo correction is to make the epipolar lines on the first and second calibration projection grating images horizontal, so that any matched first calibration pixel p1 and second calibration pixel p2 have the same ordinate.
In addition, it should be noted that a de-distortion operation may be performed during the stereo correction process; the de-distortion operation may be implemented by calculating the pixel coordinate correspondence between the image before and after de-distortion, so as to eliminate errors caused by lens distortion of the first image acquisition device and the second image acquisition device.
S104, searching second calibration pixels matched with all the first calibration pixels in the first calibration projection grating image in the second calibration projection grating image, and obtaining matched calibration pixel pairs.
After the stereo correction is completed, since the ordinates of corresponding calibration pixels in the first calibration projection grating image and the second calibration projection grating image are the same, the second calibration pixel p2 corresponding to a first calibration pixel p1 can be found by searching only the single row of the second calibration projection grating image whose ordinate equals that of p1.
Alternatively, a binary search may be used to locate, in the second calibration projection grating image, the second calibration pixel matching each first calibration pixel of the first calibration projection grating image. In some embodiments, if the phase difference between a first calibration pixel and a second calibration pixel is less than a preset difference threshold, the two pixels may be considered a matched calibration pixel pair.
In some embodiments, to improve the matching accuracy, the abscissas of the second calibration pixels may be interpolated based on an interpolation operation, for example linear interpolation, to obtain sub-pixel abscissas, and the calibration pixel matched with the first calibration pixel is searched for among the second calibration pixels corresponding to the sub-pixel abscissas.
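As an illustrative sketch of the row search described above (the function name is an assumption; the absolute phase is assumed to increase monotonically along the rectified row), a binary search brackets the target phase and linear interpolation yields the sub-pixel abscissa:

```python
import numpy as np

def match_pixel(phase_row, target_phase, max_diff=0.5):
    """Find the sub-pixel abscissa in one rectified row of the second
    image whose absolute phase matches target_phase from the first image."""
    lo, hi = 0, len(phase_row) - 1
    if not (phase_row[lo] <= target_phase <= phase_row[hi]):
        return None  # phase not present in this row: no match
    while hi - lo > 1:  # binary search for the bracketing pixel pair
        mid = (lo + hi) // 2
        if phase_row[mid] <= target_phase:
            lo = mid
        else:
            hi = mid
    p_lo, p_hi = phase_row[lo], phase_row[hi]
    # linear interpolation between the two bracketing integer pixels
    x = float(lo) if p_hi == p_lo else lo + (target_phase - p_lo) / (p_hi - p_lo)
    # reject the match if the nearest pixel's phase still deviates too much
    if abs(phase_row[int(round(x))] - target_phase) > max_diff:
        return None
    return x
```

In practice the phase along a row is only piecewise monotonic, so a real implementation would restrict the search to a phase-continuous segment of the row.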
S105, fitting and constructing a first calibration function according to each first calibration pixel in the matched calibration pixel pair.
Alternatively, the first calibration function may be expressed as:
d = a0 + a1·φ̄ + a2·φ̄² + … + an·φ̄ⁿ
where n is the order of the polynomial, which may be an empirically tested value, ai are the polynomial coefficients, and φ̄ = ψ/(2πf) is the normalized phase value, where ψ represents the absolute phase value and f represents the number of cycles of the absolute phase function ψ.
The coefficients ai can be obtained by the least squares method from the depth values and corresponding phase values of at least n+1 first calibration pixels. Alternatively, the obtained polynomial coefficients may be stored in the form of a three-dimensional matrix of size w × h × (n+1), where w is the width of the first calibration projection grating image and h is its height.
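The per-pixel least-squares fit described above can be sketched as follows (function names are illustrative; the normalization φ̄ = ψ/(2πf) follows the formula given earlier):

```python
import numpy as np

def fit_calibration(depths, phases, f, n=3):
    """Fit d = sum_i a_i * (psi / (2*pi*f))**i by least squares at one pixel.
    depths/phases: calibration depth and absolute phase samples; f: cycle count."""
    phi_norm = np.asarray(phases, dtype=float) / (2.0 * np.pi * f)
    A = np.vander(phi_norm, n + 1, increasing=True)  # columns [1, phi, phi^2, ...]
    a, *_ = np.linalg.lstsq(A, np.asarray(depths, dtype=float), rcond=None)
    return a  # coefficients a_0 .. a_n

def eval_calibration(a, phase, f):
    """Evaluate the fitted polynomial at an absolute phase value."""
    return np.polyval(a[::-1], phase / (2.0 * np.pi * f))
```

Storing the ai for every pixel gives exactly the w × h × (n+1) coefficient matrix mentioned above.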
It should be noted that, for the second calibration function, the depth value and the corresponding phase value corresponding to each second calibration pixel in the matched calibration pixel pair may be obtained by fitting, and the specific fitting process may refer to the description of the first calibration function, which is not described herein.
In an alternative embodiment, the projector is specifically configured to project three sets of calibration grating images onto the object to be measured according to the calibration instruction sent by the control unit, where the three sets of calibration grating images have different frequencies, and each set includes a first sub-calibration projection grating image, a second sub-calibration projection grating image, a third sub-calibration projection grating image, and a fourth sub-calibration projection grating image with different phase shifts.
The three sets of calibration grating images constitute the three-frequency four-step phase-shift calibration grating images. Optionally, the frequencies of the three sets of calibration grating images may be 59, 64, and 70 respectively; of course, the frequencies are not limited to these specific values.
Alternatively, each set of calibration grating images may include four images: the first, second, third, and fourth sub-calibration projection grating images with different phase shifts.
Taking a certain set of calibration grating images as an example for illustration, at each first calibration pixel, the gray value of each sub-calibration projection grating image may be expressed as:
I1(x,y) = I′(x,y) + I″(x,y)cos[φ(x,y)]
I2(x,y) = I′(x,y) + I″(x,y)cos[φ(x,y) + π/2]
I3(x,y) = I′(x,y) + I″(x,y)cos[φ(x,y) + π]
I4(x,y) = I′(x,y) + I″(x,y)cos[φ(x,y) + 3π/2]
wherein I1(x,y), I2(x,y), I3(x,y), and I4(x,y) respectively represent the gray values of the first, second, third, and fourth sub-calibration projection grating images at the first calibration pixel (x,y); I′(x,y) represents the average gray corresponding to the first calibration pixel (x,y); I″(x,y) represents the modulation gray corresponding to the first calibration pixel (x,y); and φ(x,y) represents the phase principal value at the first calibration pixel (x,y).
From the above formulas, the gray values corresponding to each first calibration pixel follow a sinusoidal distribution; the phase shift of the second sub-calibration projection grating image relative to the first is π/2, that of the third relative to the first is π, and that of the fourth relative to the first is 3π/2. Of course, the phase shifts between the sub-calibration projection grating images are not limited thereto.
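For illustration, the four phase-shifted sinusoidal patterns of one frequency can be generated as below (the generator name and gray levels are assumptions, not from the patent):

```python
import numpy as np

def make_four_step_patterns(width, height, freq, i_mean=127.0, i_mod=100.0):
    """Four-step patterns I_k(x,y) = I' + I''*cos(2*pi*freq*x/width + k*pi/2)."""
    x = np.arange(width)
    base = 2.0 * np.pi * freq * x / width  # phase varies along the abscissa
    return [np.tile(i_mean + i_mod * np.cos(base + k * np.pi / 2.0), (height, 1))
            for k in range(4)]
```

The π-shifted pairs (first/third and second/fourth) sum to twice the average gray, which is the property the average-gray-deviation constraint below relies on.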
Fig. 3 is a schematic diagram of another three-dimensional measurement procedure of surface structured light provided in the present application. In an alternative embodiment, as shown in fig. 3, searching the second calibration pixels in the second calibration projection grating image, where the second calibration pixels are matched with each first calibration pixel in the first calibration projection grating image, to obtain a matched calibration pixel pair includes:
s201, determining whether a phase value corresponding to a first calibration pixel is valid or not based on a preset constraint function, wherein the preset constraint function comprises: a modulation gray scale constraint function and an average gray scale deviation constraint function.
Expanding the gray value calculation formulas gives:
I1(x,y) = I′(x,y) + I″(x,y)cos[φ(x,y)]
I2(x,y) = I′(x,y) − I″(x,y)sin[φ(x,y)]
I3(x,y) = I′(x,y) − I″(x,y)cos[φ(x,y)]
I4(x,y) = I′(x,y) + I″(x,y)sin[φ(x,y)]
Thus, the modulation gray can be expressed as:
I″(x,y) = (1/2)·√{[I1(x,y) − I3(x,y)]² + [I4(x,y) − I2(x,y)]²}
wherein the modulation gray constraint function indicates that if the modulation gray I″(x,y) is smaller than a preset modulation gray threshold τ1, the phase value at the first calibration pixel (x,y) may be considered invalid, i.e., the area where it is located may be an area onto which the calibration grating pattern is not projected.
In addition, the average gray value deviation may be calculated according to the gray value calculation formulas to identify the overexposed region. The average gray scale deviation constraint function indicates that if no overexposure exists at the first calibration pixel (x,y), the values of (I1(x,y)+I3(x,y))/2 and (I2(x,y)+I4(x,y))/2 are approximately equal, so the average gray value deviation is approximately equal to 0; if (x,y) is located in an overexposed region, the gray value of one or more of the four images is clipped to 255, so (I1(x,y)+I3(x,y))/2 and (I2(x,y)+I4(x,y))/2 differ by a certain amount. When the average gray value deviation is greater than a preset average gray value deviation threshold τ2, the phase value at that pixel is also considered invalid.
Based on the above description, for each first calibration pixel, the corresponding modulation gray value and average gray value deviation can be calculated according to the modulation gray calculation function and the average gray value deviation calculation function respectively, then compared with the preset modulation gray threshold τ1 and the preset average gray value deviation threshold τ2, and whether the phase value at each first calibration pixel is valid is determined according to the comparison results.
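Both validity constraints can be combined into one per-pixel check; a minimal sketch with floating-point gray values (the function name and threshold values are illustrative):

```python
import numpy as np

def phase_valid(i1, i2, i3, i4, tau1=10.0, tau2=10.0):
    """Valid iff modulation gray >= tau1 and average-gray deviation <= tau2."""
    modulation = 0.5 * np.sqrt((i1 - i3) ** 2 + (i4 - i2) ** 2)
    deviation = np.abs((i1 + i3) / 2.0 - (i2 + i4) / 2.0)
    return (modulation >= tau1) & (deviation <= tau2)
```

Applied to full images (NumPy arrays), this yields a boolean validity mask in one call.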
S202, if the phase value is valid, searching the second calibration projection grating image for the second calibration pixels matched with the first calibration pixels in the first calibration projection grating image to obtain matched calibration pixel pairs.
If the phase value is determined to be valid, the matching operation may refer to step S104 to obtain the matched calibration pixel pairs, which is not repeated here.
In an optional implementation manner, the control unit is specifically configured to determine, based on the first calibration function, a first depth value corresponding to each first pixel according to the gray value of each first pixel in the first projection grating image; determine, based on the second calibration function, a second depth value corresponding to each second pixel according to the gray value of each second pixel in the second projection grating image; and, based on the first depth value and the second depth value, respectively calculate the first three-dimensional coordinate corresponding to each first pixel and the second three-dimensional coordinate corresponding to each second pixel through the pinhole imaging principle.
Taking the first depth value corresponding to the first pixel as an example for illustration, when the control unit calculates the first depth value corresponding to the first pixel, the control unit may determine the phase value according to the gray value of each first pixel, and then calculate the first depth value corresponding to the first pixel according to the phase value and the first calibration function.
According to the first depth value corresponding to each first pixel and the pinhole imaging principle, the first three-dimensional coordinate corresponding to each first pixel can be calculated. Denote the coordinates of a first pixel on the first projection grating image as (u1, v1) and its depth value as d1; its corresponding first three-dimensional coordinate (X1, Y1, Z1) is then:
Z1 = d1
X1 = Z1 · (u1 − cx1) / fl
Y1 = Z1 · (v1 − cy1) / fl
wherein cy1 is the principal point ordinate of the first camera after stereo correction, cx1 is the principal point abscissa of the first camera after stereo correction, and fl is the focal length of the first camera after stereo correction.
It should be noted that if the coordinates of a second pixel on the second projection grating image are (u2, v2) with depth value d2, its corresponding second three-dimensional coordinate (X2, Y2, Z2) is:
Z2 = d2
X2 = Z2 · (u2 − cx2) / fr + B
Y2 = Z2 · (v2 − cy2) / fr
wherein cy2 is the principal point ordinate of the second camera after stereo correction, cx2 is the principal point abscissa of the second camera after stereo correction, and fr is the focal length of the second camera after stereo correction. It should be noted that the calculation formula of the second three-dimensional coordinate adds the baseline length B because the origin of the second camera coordinate system has coordinates (B, 0, 0) in the first camera coordinate system, so the coordinates of the first camera and the second camera differ only by the baseline length B.
It should be noted that, the origin of coordinates corresponding to the first three-dimensional coordinate and the second three-dimensional coordinate may be a world coordinate system, and the second depth value corresponding to the second pixel may refer to the calculation process of the first depth value, which is not described herein.
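The two back-projections above can be sketched directly (hypothetical helper names; cx1/cy1/fl and cx2/cy2/fr are the rectified intrinsics defined above, and b is the baseline length B):

```python
def backproject_first(u1, v1, d1, cx1, cy1, fl):
    """Pinhole back-projection for the first camera: pixel + depth -> (X, Y, Z)."""
    z = d1
    return z * (u1 - cx1) / fl, z * (v1 - cy1) / fl, z

def backproject_second(u2, v2, d2, cx2, cy2, fr, b):
    """Second camera: identical back-projection, shifted by the baseline along X."""
    z = d2
    return z * (u2 - cx2) / fr + b, z * (v2 - cy2) / fr, z
```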
Fig. 4 is a schematic diagram of another three-dimensional measurement procedure of surface structured light provided in the present application. In an alternative embodiment, as shown in fig. 4, the control unit is specifically configured to:
S301, determining a first phase map according to gray values of first pixels in the first projection raster image.
S302, determining a second phase map according to the gray value of each second pixel in the second projection raster image.
S303, performing stereo correction on the first phase map and the second phase map so that the ordinates of corresponding pixels of the first projection grating image and the second projection grating image in the horizontal direction are the same.
S304, searching whether second pixels matched with first pixels in the first projection grating image exist in the second projection grating image.
The descriptions of steps S301 to S304 may be referred to the contents of steps S101 to S104, and are not repeated here.
S305, if a matched second pixel exists, respectively calculating the first depth value corresponding to each first pixel in the matched point pairs based on a triangulation algorithm.
S306, if no matched second pixel exists and the phase value corresponding to the first pixel is determined to be valid, determining the first depth value corresponding to the first pixel based on the first calibration function.
Fig. 5 is a schematic diagram of a triangulation algorithm provided in the embodiment of the present application, where if there is a second pixel matched with a first pixel, a first depth value corresponding to the first pixel may be calculated based on the triangulation algorithm, and the specific calculation method is as follows:
Denote a first pixel on the first projection grating image as p1(ul, vl) and the matched second pixel on the second projection grating image as p2(ur, vr); after stereo correction, vl = vr = v. The depth value d corresponding to the first pixel p1 and the second pixel p2 can be calculated as follows:
disparity=(ul-cxl)-(ur-cxr)
d=B*fm/disparity
As shown in fig. 5, cxl and cxr are the principal point abscissas of the first and second cameras after stereo correction, disparity is the disparity value, B is the baseline length, that is, the distance between the optical centers of the first and second cameras, fm is the focal length of the first camera after stereo correction, Ol is the coordinate position of the optical center of the first camera, and Or is the coordinate position of the optical center of the second camera. Traversing the whole first projection grating image based on the above formulas, the first depth values corresponding to all successfully matched first pixels can be calculated.
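The disparity-to-depth relation above, as a minimal sketch (the helper name is illustrative):

```python
def triangulate_depth(ul, ur, cxl, cxr, b, fm):
    """d = B * fm / disparity, with disparity = (ul - cxl) - (ur - cxr)."""
    disparity = (ul - cxl) - (ur - cxr)
    if disparity == 0:
        return None  # zero disparity: point at infinity, no finite depth
    return b * fm / disparity
```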
Fig. 6 is a schematic diagram of another three-dimensional measurement procedure of surface structured light provided in the present application. In an alternative embodiment, as shown in fig. 6, the determining the first phase map according to the gray value of each first pixel in the first projection raster image includes:
s401, determining a first phase principal value function according to gray values of first pixels in the first projection raster image.
For a better understanding of the present application, the description continues with the first calibration pixel as an example. Fig. 7 is a schematic diagram of a phase principal value function provided in the present application: Fig. 7(a) shows the waveforms of the gray value functions corresponding to the sub-calibration projection grating images, and Fig. 7(b) shows the waveform of the first calibration phase principal value function corresponding to the first calibration pixels, where the horizontal axis represents the abscissa of the pixel. Based on the above gray value calculation formulas, the first calibration phase principal value function corresponding to the first calibration pixel may be expressed as:
φ(x,y) = arctan{[I4(x,y) − I2(x,y)] / [I1(x,y) − I3(x,y)]}
It should be noted that, the determination of the first phase principal value function corresponding to the first pixel may refer to the determination of the first calibration phase principal value function corresponding to the first calibration pixel, which is not described herein.
It will be appreciated that since the tangent function is periodic, the phase distribution obtained by the arctangent function is periodic and discontinuous, and that a phase unwrapping operation is also required in order to obtain monotonically increasing absolute phase values.
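As a sketch, the principal value can be computed with the two-argument arctangent, which resolves the quadrant ambiguity of the plain arctangent noted above (the function name is an assumption):

```python
import numpy as np

def wrapped_phase(i1, i2, i3, i4):
    """Four-step principal value: phi = atan2(I4 - I2, I1 - I3), mapped to [0, 2*pi)."""
    return np.mod(np.arctan2(i4 - i2, i1 - i3), 2.0 * np.pi)
```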
S402, converting the first phase principal value function based on the multi-frequency heterodyne principle to obtain a first absolute phase function.
Fig. 8 is a waveform schematic diagram of a first calibration absolute phase function according to an embodiment of the present application, in which the horizontal axis represents the abscissa of the pixel and the vertical axis represents the phase value. Based on the above description, the frequencies of the three sets of calibration grating images in the three-frequency four-step phase-shift calibration grating images are denoted f1, f2, and f3 respectively, where f1 < f2 < f3. In the specific conversion, the phase principal value functions with frequencies f1 and f2, and with frequencies f2 and f3, are first synthesized into phase functions φ12(x,y) and φ23(x,y) with frequencies f12 = f2 − f1 and f23 = f3 − f2; the phase functions with frequencies f12 and f23 are then synthesized into a phase function φ123(x,y) with frequency f123 = f23 − f12 = 1. Taking φ12(x,y) as an example, the principle of phase synthesis is:
φ12(x,y) = φ2(x,y) − φ1(x,y), if φ2(x,y) ≥ φ1(x,y); φ12(x,y) = φ2(x,y) − φ1(x,y) + 2π, otherwise.
wherein the frequencies of φ1(x,y), φ2(x,y), and φ12(x,y) are f1, f2, and f12 respectively; their calculation formulas can be referred to the calculation formula in step S401.
After the phase synthesis is completed, the phase function φ123(x,y) with frequency f123 = 1 is used to unwrap φ12(x,y) and φ23(x,y), obtaining ψ12(x,y) and ψ23(x,y); the unwrapped phase functions ψ12(x,y) and ψ23(x,y) are then used to unwrap φ1(x,y), φ2(x,y), and φ3(x,y), obtaining ψ1(x,y), ψ2(x,y), and ψ3(x,y).
Taking φ1(x,y) as an example, the principle formulas of phase unwrapping are:
ψ12(x,y) = φ12(x,y) + 2π·INT[(φ123(x,y)·f12/f123 − φ12(x,y))/2π]
ψ1(x,y) = φ1(x,y) + 2π·INT[(ψ12(x,y)·f1/f12 − φ1(x,y))/2π]
wherein INT is a rounding function. Referring to Fig. 8, curve L1 is the waveform of the function φ1(x,y) and curve L2 is the waveform of the function ψ1(x,y); it can be seen that the converted function ψ1(x,y) is monotonically increasing.
Based on the above description, three absolute phase functions ψ1(x,y), ψ2(x,y), and ψ3(x,y) can be obtained after phase unwrapping, and they are combined to obtain the final absolute phase function ψ(x,y). The conversion processes of ψ2(x,y) and ψ3(x,y) may refer to that of ψ1(x,y) and are not repeated here.
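The heterodyne synthesis and unwrapping steps above can be sketched as follows (function names and the beat-phase sign convention are assumptions, chosen consistently with f1 < f2 < f3):

```python
import numpy as np

def heterodyne(phi_hi, phi_lo):
    """Beat phase of two wrapped phases; its frequency is f_hi - f_lo."""
    return np.mod(phi_hi - phi_lo, 2.0 * np.pi)

def unwrap_with(phi, psi_coarse, f_ratio):
    """psi = phi + 2*pi*INT[(psi_coarse * f_ratio - phi) / (2*pi)],
    where f_ratio = f_fine / f_coarse and psi_coarse is already absolute."""
    k = np.round((psi_coarse * f_ratio - phi) / (2.0 * np.pi))
    return phi + 2.0 * np.pi * k
```

With frequencies 59, 64, and 70, the beats have frequencies 5, 6, and finally 1, so φ123 spans a single period over the whole field, is therefore already absolute, and seeds the unwrapping chain.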
It is understood that, based on the determined absolute phase function, a stereo matching operation may be performed, and the description of the stereo matching operation may be referred to the relevant portions above, which are not described herein.
S403, determining a first phase diagram according to the first absolute phase function.
Based on the above description, after the first absolute phase function is obtained, the first absolute phase value corresponding to each first pixel can be calculated from it, and these absolute phase values can be stored in a matrix to obtain the first phase map.
It should be noted that, the calculation of the second phase map may refer to the calculation of the first phase map, which is not described herein.
In an optional embodiment, the control unit is further configured to calculate a modulation gray level corresponding to each first pixel according to a gray level value of each sub-grating image at the first pixel; if the modulation gray level corresponding to the first pixel is smaller than the preset modulation gray level threshold value, the phase value corresponding to the first pixel is invalid.
Whether the phase value corresponding to the first pixel is valid can be determined according to the modulation gray constraint function; that is, if the modulation gray corresponding to the first pixel is smaller than the preset modulation gray threshold τ1, the phase value corresponding to the first pixel (x, y) may be considered invalid.
In an optional embodiment, the control unit is further configured to calculate an average gray value deviation corresponding to the first pixel according to the gray value of each sub-grating image on the first pixel; if the average gray value deviation corresponding to the first pixel is larger than the preset average gray value deviation threshold, determining that the phase value corresponding to the first pixel is invalid.
The calculation formula of the average gray value deviation corresponding to each first pixel may refer to the calculation formula of the average gray value deviation corresponding to the first calibration pixel, which is not described herein.
In summary, in the measurement method provided by the embodiment of the present application, when an image overexposure phenomenon exists in the first projection grating image or the second projection grating image, the control unit may substitute the phase value of each first pixel or each second pixel in the overexposure area into the first calibration function or the second calibration function, so as to solve each first depth value or each second depth value corresponding to each first pixel or each second pixel, and further obtain a first three-dimensional coordinate or a second three-dimensional coordinate corresponding to each first pixel or each second pixel, so as to avoid the problem of loss of measurement data in the overexposure area, thereby obtaining more accurate three-dimensional feature parameters.
In addition, compared with reducing the exposure time of the image acquisition device to alleviate overexposure, the present method avoids enlarging the decoding-failure area caused by excessively low brightness. Compared with a multiple-exposure fusion algorithm, in which the image acquisition device is controlled, according to shooting instructions sent by the control unit, to capture several groups of images with different exposure times, and the high-exposure and low-exposure images are used to handle the shadow and overexposed areas respectively before the results are fused, the present method increases the integrity of the measured data without requiring multiple captures.
Optionally, the invention further provides a surface structured light three-dimensional measurement method, which is applied to a control unit in a surface structured light three-dimensional measurement system, wherein the surface structured light three-dimensional measurement system comprises: the projector, the first image acquisition device, the second image acquisition device and the control unit, wherein the projector, the first image acquisition device and the second image acquisition device are respectively and electrically connected with the control unit, the projector is positioned between the first image acquisition device and the second image acquisition device, and the orientation is consistent with the depth direction, and the method comprises the following steps:
receiving a first projection grating image and a second projection grating image sent by a first image acquisition device and a second image acquisition device; determining a first three-dimensional coordinate corresponding to each first pixel in the first projection grating image and a second three-dimensional coordinate corresponding to each second pixel in the second projection grating image based on a preset calibration function; according to the first three-dimensional coordinate and the second three-dimensional coordinate, three-dimensional characteristic parameters of the object to be measured are obtained, wherein the preset calibration function comprises: the first calibration function is used for representing the functional relation between the depth value and the phase value corresponding to each first pixel in the first image acquisition equipment, and the second calibration function is used for representing the functional relation between the depth value and the phase value corresponding to each second pixel in the second image acquisition equipment.
For the description of each step, reference may be made to the related steps described above, which are not repeated here.
By applying the embodiment of the application, when the image overexposure phenomenon exists in the first projection grating image or the second projection grating image, the control unit can substitute the phase value of each first pixel or each second pixel in the overexposure area into the first calibration function or the second calibration function, so that each first depth value or each second depth value corresponding to each first pixel or each second pixel is solved, and further, each first three-dimensional coordinate or each second three-dimensional coordinate corresponding to each first pixel or each second pixel is obtained, and the problem of loss of measurement data of the overexposure area is avoided, so that more accurate three-dimensional characteristic parameters are obtained.
In the several embodiments provided in this application, it should be understood that the disclosed apparatus and method may be implemented in other ways. For example, the apparatus embodiments described above are merely illustrative, e.g., the division of elements is merely a logical functional division, and there may be additional divisions of actual implementation, e.g., multiple elements or components may be combined or integrated into another system, or some features may be omitted, or not performed. Alternatively, the coupling or direct coupling or communication connection shown or discussed with each other may be an indirect coupling or communication connection via some interfaces, devices or units, which may be in electrical, mechanical or other form.
The units described as separate units may or may not be physically separate, and units shown as units may or may not be physical units, may be located in one place, or may be distributed over a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in each embodiment of the present application may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated units may be implemented in hardware or in hardware plus software functional units.
The integrated units implemented in the form of software functional units described above may be stored in a computer readable storage medium. The software functional unit is stored in a storage medium, and includes several instructions for causing a computer device (which may be a personal computer, a server, or a network device, etc.) or a processor (english: processor) to perform part of the steps of the methods of the embodiments of the present application. And the aforementioned storage medium includes: u disk, mobile hard disk, read-Only Memory (ROM), random access Memory (Random Access Memory, RAM), magnetic disk or optical disk, etc.
It should be noted that in this document, relational terms such as "first" and "second" and the like are used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Moreover, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising one … …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises an element.
The foregoing is merely a preferred embodiment of the present application and is not intended to limit the present application; various modifications and variations may be made to the present application by those skilled in the art. Any modification, equivalent replacement, improvement, etc. made within the spirit and principles of the present application should be included in the protection scope of the present application. It should be noted that like reference numerals and letters denote like items in the following figures; thus, once an item is defined in one figure, no further definition or explanation thereof is necessary in the following figures.

Claims (10)

the control unit is used for determining a first three-dimensional coordinate corresponding to each first pixel in the first projection grating image and a second three-dimensional coordinate corresponding to each second pixel in the second projection grating image based on a preset calibration function; according to the first three-dimensional coordinate and the second three-dimensional coordinate, three-dimensional characteristic parameters of an object to be measured are obtained, wherein the preset calibration function comprises: the first calibration function is used for representing the functional relation between the depth value and the phase value corresponding to each first pixel in the first image acquisition equipment, and the second calibration function is used for representing the functional relation between the depth value and the phase value corresponding to each second pixel in the second image acquisition equipment.
CN202410009447.9A | 2024-01-02 | 2024-01-02 | Surface structured light three-dimensional measurement system and method | Pending | CN117824532A (en)

Priority Applications (1)

Application Number | Priority Date | Filing Date | Title
CN202410009447.9A (CN117824532A, en) | 2024-01-02 | 2024-01-02 | Surface structured light three-dimensional measurement system and method

Applications Claiming Priority (1)

Application Number | Priority Date | Filing Date | Title
CN202410009447.9A (CN117824532A, en) | 2024-01-02 | 2024-01-02 | Surface structured light three-dimensional measurement system and method

Publications (1)

Publication Number | Publication Date
CN117824532A (en), true | 2024-04-05

Family

ID=90520737

Family Applications (1)

Application Number | Title | Priority Date | Filing Date
CN202410009447.9A (Pending) | CN117824532A (en) | 2024-01-02 | 2024-01-02

Country Status (1)

Country | Link
CN (1) | CN117824532A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
CN118032790A (en)* | 2024-04-15 | 2024-05-14 | Ji Hua Laboratory | Mirror-like object defect detection method and three-dimensional reconstruction method


Similar Documents

PublicationPublication DateTitle
CN110514143B (en): Stripe projection system calibration method based on reflector
CN100520285C: Vision measuring method for projecting multiple frequency grating object surface tri-dimensional profile
Liu et al.: Real-time 3D surface-shape measurement using background-modulated modified Fourier transform profilometry with geometry-constraint
US20120176478A1: Forming range maps using periodic illumination patterns
CN114739322B (en): A three-dimensional measurement method, equipment and storage medium
CN105547190B (en): 3D measuring method and device based on double angle unifrequency fringe projections
US9147279B1: Systems and methods for merging textures
CN116802688A (en): Apparatus and method for correspondence analysis within an image
CN118640829B (en): Binocular structured light measuring method and system based on complete polarization coding
CN119478254A (en): Three-dimensional reconstruction method and system based on polarization fringe projection structured light fusion
CN117824532A (en): Surface structured light three-dimensional measurement system and method
Liu et al.: A novel phase unwrapping method for binocular structured light 3D reconstruction based on deep learning
Ma et al.: A multidistance constraint method for three-dimensional reconstruction with coaxial fringe projection measurement system
CN116416294A (en): Accurate three-dimensional reconstruction method for object with inconsistent reflectivity
CN116205843A (en): A 3D point cloud acquisition method based on adaptive stripe iteration for highly reflective engine blades
CN111121663A (en): Object three-dimensional topography measurement method, system and computer-readable storage medium
CN116385653A (en): A self-supervised method and device for 3D imaging based on monocular high-frequency fringes
CN116518869A (en): Metal surface measurement method and system based on photometric stereo and binocular structured light
WO2023109960A1: Three-dimensional scanning processing method and apparatus and three-dimensional scanning device
CN117830504A (en): Method and device for reconstructing three-dimensional model, electronic equipment and storage medium
CN111102938B (en): Object three-dimensional topography measuring method, system and computer-readable storage medium
Chen et al.: Pixel-wise phase map fusion technique for high dynamic range 3D shape measurement
CN119006464B (en): Defect detection method, device, system, computer equipment and readable storage medium
CN120445091B (en): Rapid measurement system of cuboid workpiece under three-coordinate structured light
CN119991789A (en): A method and system for three-dimensional posture recognition of conductor tension clamp based on structured light

Legal Events

Date | Code | Title | Description
— | PB01 | Publication | —
— | SE01 | Entry into force of request for substantive examination | —
