Disclosure of Invention
The present invention is directed to overcoming the above-mentioned drawbacks of the prior art, and provides a single-camera multi-view image acquisition system and a three-dimensional reconstruction method based on a rotating bipartite prism.
The purpose of the invention can be realized by the following technical scheme:
a single-camera image acquisition system based on a rotary bipartite prism comprises a camera device and a rotary bipartite prism device, wherein the camera device comprises a camera and a camera support for supporting the camera;
the rotating bipartite prism device comprises a bipartite prism, a prism supporting structure, a rotating mechanism and an outer shell for supporting the rotating bipartite prism device, wherein the bipartite prism is fixedly arranged in the central area of the prism supporting structure, and the output end of the rotating mechanism is connected with the prism supporting structure and is used for driving the prism supporting structure to rotate on a vertical plane;
the detection end of the camera is aligned with the bipartite prism.
Further, the target surface of the camera and the back surface of the bipartite prism satisfy a parallel relationship, and the optical axis of the camera intersects and is perpendicular to the top ridge line opposite to the back surface of the bipartite prism.
Further, the rotating mechanism is a torque motor comprising a torque motor rotor, torque motor brushes and a torque motor stator; the prism supporting structure is connected to the torque motor rotor, and the torque motor stator is mounted on the outer shell.
The invention also provides a three-dimensional reconstruction method of the single-camera image acquisition system based on the rotating bipartite prism, which comprises the following steps:
system construction and parameter calibration: adjusting the position and the posture of the camera device and the rotary bipartite prism device to construct a single-camera imaging system and a working coordinate system thereof; acquiring internal parameters of the camera and an axial distance between the camera and the bipartite prism by using a visual calibration method;
acquiring a multi-view image sequence: the rotating mechanism is controlled to drive the bipartite prism to rotate, and the camera is used to collect a dual-view image containing target information at each rotation angle position of the bipartite prism, so as to form a multi-view target image sequence for three-dimensional reconstruction;
stereo matching and cross optimization: deriving a dynamic virtual binocular system model equivalent to the single-camera imaging system according to the direction of the camera visual axis after deflection by the bipartite prism and the rotation angle of the bipartite prism, establishing the epipolar constraint relation of the dual-view image corresponding to each bipartite prism rotation angle position, searching for homonymous image points in the dual-view image through a window matching algorithm, and performing cross checking and optimization on the homonymous image points in the dual-view images at different bipartite prism rotation angle positions to realize stereo matching;
three-dimensional reconstruction and point cloud filtering: acquiring an initial estimate of the three-dimensional point cloud of the target according to the homonymous image points of the dual-view image corresponding to one bipartite prism rotation angle position; supplementing the point cloud information missing from the initially estimated three-dimensional point cloud according to the homonymous image points of the dual-view images corresponding to the other bipartite prism rotation angle positions, so as to update the three-dimensional point cloud; and then carrying out noise filtering to obtain the final three-dimensional point cloud reconstruction result.
Further, in the step of system construction and parameter calibration, the step of constructing the single-camera imaging system is specifically to adjust the positions and postures of the camera device and the rotary bipartite prism device so as to ensure the parallel relationship between the camera target surface and the back surface of the bipartite prism, the perpendicular relationship between the camera optical axis and the crest line at the top of the bipartite prism, and the axial distance relationship between the camera and the bipartite prism;
specifically, the working coordinate system of the single-camera imaging system is established, an origin O is fixed at the optical center position of the camera, a Z axis coincides with the optical axis direction of the camera, an X axis and a Y axis are both orthogonal to the Z axis, the X axis corresponds to the line scanning direction of the camera image sensor, and the Y axis corresponds to the column scanning direction of the camera image sensor.
Further, in the stereo matching and cross optimization step, the derivation process of the dynamic virtual binocular system model is specifically as follows:
the two directions d_L and d_R to which the camera visual axis points after deflection by the bipartite prism, symmetric about the optical axis direction of the single-camera imaging system, are calculated by a ray tracing method, thereby determining the two imaging visual angles corresponding to any bipartite prism rotation angle; the dynamic virtual binocular system model is then derived according to the variation of the camera visual axis orientation with the bipartite prism rotation angle;
the calculation expression of the direction of the camera visual axis after deflection by the bipartite prism is as follows:
d_o = [0, 0, 1]^T
where d_o is the optical axis direction of the single-camera imaging system, n_L is the normal vector of the left side face of the bipartite prism, n_R is the normal vector of the right side face of the bipartite prism, n_B is the normal vector of the back face of the bipartite prism, α is the included angle between the side faces and the back face of the bipartite prism, n is the refractive index of the material of the bipartite prism, and ω is the rotation angle of the bipartite prism;
the dynamic virtual binocular system model comprises a left virtual camera and a right virtual camera, and the calculation expressions of the rotation matrices and translation vectors of the left and right virtual cameras relative to the actual camera at any bipartite prism rotation angle ω are as follows:
where R_L(ω) is the rotation matrix of the left virtual camera relative to the actual camera at any bipartite prism rotation angle ω, t_L(ω) is the translation vector of the left virtual camera relative to the actual camera at any bipartite prism rotation angle ω, R_R(ω) is the rotation matrix of the right virtual camera relative to the actual camera at any bipartite prism rotation angle ω, t_R(ω) is the translation vector of the right virtual camera relative to the actual camera at any bipartite prism rotation angle ω, Rot(·) represents a rotation through a certain angle about the axis defined by the outer product of two vectors, the angle being determined by the vector cosine law, and g is the distance from the optical center of the actual camera to the back face of the bipartite prism.
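For illustration only, the following Python sketch shows one way to realize the Rot(·) operation mentioned above: a rotation about the axis given by the outer product (cross product) of two direction vectors, through the angle given by the vector cosine law, assembled with Rodrigues' formula. The function name rot_between, the NumPy implementation and the sample deflected direction are assumptions added here and are not part of the disclosed expressions.

```python
import numpy as np

def rot_between(a, b):
    """Rotation matrix taking unit vector a onto unit vector b.

    Axis = a x b (outer/cross product), angle from the vector cosine law,
    assembled with Rodrigues' rotation formula.
    """
    a = a / np.linalg.norm(a)
    b = b / np.linalg.norm(b)
    axis = np.cross(a, b)
    s = np.linalg.norm(axis)          # sin(angle)
    c = float(np.dot(a, b))           # cos(angle), from the cosine law
    if s < 1e-12:                     # no deflection (antiparallel case does not occur for small wedge angles)
        return np.eye(3)
    k = axis / s
    K = np.array([[0.0, -k[2], k[1]],
                  [k[2], 0.0, -k[0]],
                  [-k[1], k[0], 0.0]])  # skew-symmetric cross-product matrix of the axis
    return np.eye(3) + s * K + (1.0 - c) * (K @ K)

# Example: optical axis of the single-camera system and an illustrative deflected visual axis.
d_o = np.array([0.0, 0.0, 1.0])
d_L = np.array([0.05, 0.0, 1.0]) / np.linalg.norm([0.05, 0.0, 1.0])
R_L = rot_between(d_o, d_L)           # orientation of the left virtual camera w.r.t. the actual camera
print(R_L @ d_o, d_L)                 # the two printed vectors should coincide
```

Applying rot_between(d_o, d_R) in the same way would give the orientation of the right virtual camera.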
Further, in the stereo matching and cross optimization step, the dynamic virtual binocular system model includes a basic matrix between the left virtual camera and the right virtual camera at any bipartite prism rotation angle ω, and the calculation expression of the basic matrix is as follows:
F(ω) = (A_int)^(-T) (R_LR)^(-1) T_LR (A_int)^(-1)
R_LR = R_L(ω) R_R(ω)^(-1)
t_LR = t_L - R_L(ω) R_R(ω)^(-1) t_R
where A_int is the internal parameter matrix of the camera, R_LR is the relative rotation matrix of the left and right virtual cameras, and T_LR is the skew-symmetric matrix corresponding to the relative translation vector t_LR;
and the basic matrix F(ω) of the left and right virtual cameras is multiplied by the homogeneous coordinates of an image point contained in one half of the dual-view image to obtain the position of the corresponding epipolar line in the other half of the image, thereby obtaining the epipolar constraint relation.
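As a numerical illustration of the epipolar relation above, the sketch below assembles F(ω) from the stated expression and maps a homogeneous image point from one half-image to the coefficients of its epipolar line in the other half-image; the intrinsic matrix, the virtual-camera extrinsics and the helper names are placeholder assumptions.

```python
import numpy as np

def skew(t):
    """Skew-symmetric (cross-product) matrix of a 3-vector."""
    return np.array([[0.0, -t[2], t[1]],
                     [t[2], 0.0, -t[0]],
                     [-t[1], t[0], 0.0]])

def fundamental_matrix(A_int, R_L, t_L, R_R, t_R):
    """F(w) = A^-T (R_LR)^-1 T_LR A^-1 with R_LR = R_L R_R^-1, t_LR = t_L - R_L R_R^-1 t_R."""
    R_LR = R_L @ np.linalg.inv(R_R)
    t_LR = t_L - R_LR @ t_R
    A_inv = np.linalg.inv(A_int)
    return A_inv.T @ np.linalg.inv(R_LR) @ skew(t_LR) @ A_inv

# Illustrative intrinsics and virtual-camera extrinsics (placeholder values).
A_int = np.array([[1200.0, 0.0, 640.0],
                  [0.0, 1200.0, 480.0],
                  [0.0, 0.0, 1.0]])
R_L, R_R = np.eye(3), np.eye(3)
t_L, t_R = np.array([-15.0, 0.0, 0.0]), np.array([15.0, 0.0, 0.0])   # mm, illustrative

F = fundamental_matrix(A_int, R_L, t_L, R_R, t_R)
p_left = np.array([700.0, 500.0, 1.0])       # homogeneous pixel in one half-image
line_other = F @ p_left                       # epipolar line a*u + b*v + c = 0 in the other half-image
print(line_other)
```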
Further, in the stereo matching and cross optimization step, the cross checking and optimization specifically comprises filtering out homonymous image points whose deviation from the epipolar line intersection is too large, according to the principle that homonymous image points are theoretically located at the intersection of the multiple epipolar lines.
Further, in the three-dimensional reconstruction and point cloud filtering step, the calculation expression of the initial estimation of the three-dimensional point cloud is as follows:
where P_i is the three-dimensional coordinate of element i in the three-dimensional point cloud set, p_iL and p_iR are the homogeneous pixel coordinates of the homonymous image points contained in the left half and the right half of the dual-view image, respectively, i belongs to the set of positive integers, and λ_L and λ_R are the scale factors of the projection ray vectors corresponding to p_iL and p_iR, respectively.
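As a sketch of the triangulation step, assuming each homonymous image point p_iL or p_iR defines a projection ray from its virtual camera and that the scale factors λ_L and λ_R are eliminated by solving the resulting linear system in the least-squares sense, the following Python code recovers one point P_i. The midpoint formulation, the extrinsic convention X_cam = R·X_work + t and all numeric values are illustrative assumptions rather than the exact expression prescribed above.

```python
import numpy as np

def backproject_ray(A_int, R, t, p_hom):
    """Projection ray (origin, unit direction) in the working frame for a virtual camera
    with extrinsics (R, t) relative to the actual camera and intrinsics A_int.
    Assumes the convention X_cam = R @ X_work + t."""
    origin = -R.T @ t                          # virtual optical center in the working frame
    d = R.T @ np.linalg.inv(A_int) @ p_hom     # ray direction through the pixel
    return origin, d / np.linalg.norm(d)

def triangulate_midpoint(o1, d1, o2, d2):
    """Least-squares point closest to both rays (eliminates the scale factors)."""
    # Solve [d1, -d2] [lam1, lam2]^T = o2 - o1 in the least-squares sense.
    A = np.stack([d1, -d2], axis=1)
    lam, *_ = np.linalg.lstsq(A, o2 - o1, rcond=None)
    p1 = o1 + lam[0] * d1
    p2 = o2 + lam[1] * d2
    return 0.5 * (p1 + p2)                     # midpoint of the two closest points

# Illustrative inputs: intrinsics, virtual-camera extrinsics and a pair of homonymous points.
A_int = np.array([[1200.0, 0.0, 640.0], [0.0, 1200.0, 480.0], [0.0, 0.0, 1.0]])
R_L, t_L = np.eye(3), np.array([-15.0, 0.0, 0.0])
R_R, t_R = np.eye(3), np.array([15.0, 0.0, 0.0])
p_L = np.array([600.0, 480.0, 1.0])            # illustrative homonymous image points
p_R = np.array([680.0, 480.0, 1.0])

P_i = triangulate_midpoint(*backproject_ray(A_int, R_L, t_L, p_L),
                           *backproject_ray(A_int, R_R, t_R, p_R))
print(P_i)    # one element of the initial three-dimensional point cloud
```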
Further, in the three-dimensional reconstruction and point cloud filtering step, the noise filtering specifically includes performing point cloud filtering according to the deviation of the three-dimensional point cloud before and after updating, and the calculation process of the point cloud filtering at each time is represented as:
where the left-hand side is the filtered three-dimensional point cloud set, P_i^estimate is an element of the initially estimated three-dimensional point cloud set, P_i^update is an element of the updated three-dimensional point cloud set, and ε is the deviation threshold between the updated point cloud and the initial estimate.
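A minimal sketch of the deviation-based filtering, assuming the filter keeps an updated point only when its distance to the corresponding initial estimate stays within ε; the function name and the sample data are illustrative, and the 1 mm threshold follows the embodiment described later.

```python
import numpy as np

def filter_point_cloud(P_estimate, P_update, eps):
    """Keep updated points whose deviation from the initial estimate is within eps.

    P_estimate, P_update: (N, 3) arrays of matched points before/after updating.
    eps: deviation threshold, in the same length unit as the point cloud.
    """
    deviation = np.linalg.norm(P_update - P_estimate, axis=1)
    keep = deviation <= eps
    return P_update[keep], keep        # filtered cloud and the inlier mask

# Illustrative use with a 1 mm threshold.
P_est = np.random.rand(100, 3) * 100.0
P_upd = P_est + np.random.normal(scale=0.3, size=P_est.shape)
P_filtered, mask = filter_point_cloud(P_est, P_upd, eps=1.0)
print(P_filtered.shape, mask.sum())
```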
Compared with the prior art, the invention has the following advantages:
(1) According to the invention, a rotating bipartite prism device is introduced in front of the single camera, and the beam-splitting effect of the bipartite prism enables the single camera to synchronously acquire image information from two symmetrical visual angles while keeping the overall structure compact; the rotating mechanism drives the bipartite prism to rotate, which effectively extends the visual axis directions and field of view of the imaging system and can to a certain extent alleviate the information loss caused by factors such as motion and occlusion, and the approach of capturing multi-view target information through a dynamic binocular vision system can effectively improve the precision, efficiency, implementation flexibility and dynamic adaptability of single-camera multi-view stereo matching and three-dimensional reconstruction.
(2) The method combines the traditional stereoscopic vision calculation theory and the dynamic virtual binocular system model, realizes simplified description of the single-camera multi-view imaging process and efficient processing of redundant image information, and can effectively improve the precision, flexibility and adaptability of single-camera three-dimensional reconstruction.
(3) The invention utilizes the multi-epipolar constraints and cross checking of the multi-view image sequence, which can not only screen out wrongly matched homonymous image points but also supplement homonymous image points not contained in a particular visual angle; it can improve the accuracy and speed of multi-view stereo matching at a low computational cost, and in particular can provide an effective solution to the problem of stereo matching in weakly textured regions.
(4) The invention does not require the camera to perform any motion, does not depend on cooperative markers of any form, and does not introduce optical elements with complex structures; multi-view image capture and three-dimensional reconstruction are realized only by the rotary motion of the refractive bipartite prism, which ensures the structural compactness and disturbance resistance of the imaging system and can provide a potential technical approach for application fields such as pattern recognition and product inspection.
Detailed Description
The invention is described in detail below with reference to the figures and specific embodiments. The present embodiment is implemented on the premise of the technical solution of the present invention, and a detailed implementation manner and a specific operation process are given, but the scope of the present invention is not limited to the following embodiments.
Example 1
The embodiment provides a single-camera image acquisition system based on a rotating bipartite prism, which comprises a camera device and a rotating bipartite prism device, wherein the rotating bipartite prism device is used for changing the propagation direction of the imaging light rays in the camera field of view so as to generate two symmetrical imaging visual angles, and the camera device is used for synchronously acquiring and recording target image information under the two imaging visual angles; the camera device comprises a camera and a camera support, wherein the camera support is used for adjusting the posture and angle of the camera; the rotating bipartite prism device comprises a bipartite prism assembly, a rotating mechanism and an outer shell; the rotating mechanism is used for driving the bipartite prism assembly to rotate, and the outer shell is used for supporting the rotating mechanism and protecting the bipartite prism assembly; the axial distance between the camera device and the rotating bipartite prism device is allowed to be adjusted within a certain range, providing more degrees of freedom for multi-view image capture and three-dimensional reconstruction.
Furthermore, the bipartite prism assembly comprises a bipartite prism and a prism support structure, the bipartite prism is arranged in the central area of the prism support structure in a glue bonding mode or a spring plate fixing mode, and the prism support structure is used for fixing and supporting the bipartite prism.
Furthermore, the rotating mechanism adopts a torque motor direct drive mode, or alternatively a gear transmission mode, a synchronous belt transmission mode, a worm and gear transmission mode or the like; the torque motor comprises a rotor and a stator; the bipartite prism assembly is connected to the torque motor rotor through the prism supporting structure by threaded connection, and the torque motor stator is mounted on the outer shell by threaded connection; the torque motor drives the bipartite prism assembly to rotate inside the outer shell.
Furthermore, the target surface of the camera and the back surface of the bipartite prism satisfy a parallel relationship, the optical axis of the camera intersects and is perpendicular to the top ridge line opposite to the back surface of the bipartite prism, and at the same time it is ensured that the field of view of the camera is not occluded by the rotating bipartite prism device.
The embodiment further provides a three-dimensional reconstruction method adopting the single-camera image acquisition system based on the rotating bipartite prism, which comprises the following steps:
S1, system construction and parameter calibration: constructing a single-camera imaging system and a working coordinate system thereof according to the relative position relationship between the camera and the bipartite prism, and acquiring the internal parameters of the camera and the distance between the camera and the bipartite prism in the optical axis direction by using a visual calibration method;
S2, multi-view image sequence acquisition: the rotation angle of the bipartite prism is changed by controlling the rotating mechanism, and the camera is used to collect a dual-view image containing target information at each rotation angle position of the bipartite prism, generating a multi-view target image sequence for three-dimensional reconstruction;
S3, stereo matching and cross optimization: the epipolar constraint relation of the dual-view image corresponding to each bipartite prism rotation angle position is established in combination with the dynamic virtual binocular system model, the homonymous image points contained in the dual-view image are searched for through a window matching algorithm, and at the same time the multi-epipolar constraints provided by the image sequences corresponding to different bipartite prism rotation angles are used to cross check and optimize the stereo matching result of the multi-view image sequence;
S4, three-dimensional reconstruction and point cloud filtering: the homonymous image points contained in the image collected at a specific bipartite prism rotation angle position are used, in combination with the triangulation principle, to calculate and recover the position coordinates of the corresponding target points and obtain an initial estimate of the three-dimensional point cloud; then, the redundant stereo matches provided by the images acquired at the other bipartite prism rotation angle positions are used to supplement the point cloud information missing from the initial estimate, and the noise possibly existing in the three-dimensional point cloud is gradually filtered out.
Further, the step S1 specifically includes:
S11, constructing an imaging system consisting of a single camera and a rotating bipartite prism device, and sequentially adjusting the postures of the camera and the bipartite prism device to ensure the parallel relation between the target surface of the camera and the back surface of the bipartite prism, the perpendicular relation between the optical axis of the camera and the crest line at the top of the bipartite prism, and the axial distance relation between the camera and the bipartite prism;
S12, establishing a working coordinate system O-XYZ of the imaging system, fixing the origin O at the optical center of the camera, making the Z axis coincide with the optical axis direction of the camera, making the X axis and the Y axis both orthogonal to the Z axis, and making the X axis and the Y axis correspond to the row scanning direction and the column scanning direction of the camera image sensor, respectively;
S13, acquiring the internal parameters of the camera and the distortion coefficients of the lens by adopting a traditional vision calibration method such as the Zhang Zhengyou calibration method, the direct linear transformation method or the two-step calibration method, and measuring and adjusting the axial distance between the camera and the bipartite prism with the aid of measuring tools such as a vernier caliper or a laser interferometer.
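Since step S13 relies on standard calibration procedures, the sketch below shows a typical Zhang-style intrinsic calibration using OpenCV's chessboard routines; the board geometry, square size and image folder are placeholders, and this is one conventional implementation rather than the specific procedure prescribed by the method.

```python
import glob
import cv2
import numpy as np

# Chessboard geometry (placeholder values): inner corner count and square size in mm.
pattern_size = (9, 6)
square_size = 10.0

# 3-D coordinates of the chessboard corners in the board frame (Z = 0 plane).
obj_template = np.zeros((pattern_size[0] * pattern_size[1], 3), np.float32)
obj_template[:, :2] = np.mgrid[0:pattern_size[0], 0:pattern_size[1]].T.reshape(-1, 2) * square_size

obj_points, img_points, image_size = [], [], None
for path in glob.glob("calib/*.png"):            # placeholder image folder
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    found, corners = cv2.findChessboardCorners(gray, pattern_size)
    if not found:
        continue
    corners = cv2.cornerSubPix(gray, corners, (11, 11), (-1, -1),
                               (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 1e-3))
    obj_points.append(obj_template)
    img_points.append(corners)
    image_size = gray.shape[::-1]

if not obj_points:
    raise SystemExit("no usable calibration images found in calib/")

# Zhang-style calibration: intrinsic matrix A_int and lens distortion coefficients.
rms, A_int, dist, rvecs, tvecs = cv2.calibrateCamera(obj_points, img_points, image_size, None, None)
print("reprojection RMS:", rms)
print("A_int:\n", A_int)
print("distortion:", dist.ravel())
```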
Further, in step S2, the bipartite prism assembly is driven by the rotating mechanism to reach m rotation angle positions in succession, and the camera is triggered to acquire the corresponding dual-view image immediately after the bipartite prism reaches each specified rotation angle position, wherein the motion control of the rotating mechanism and the image acquisition triggering of the camera are both realized by software.
Further, the step S3 specifically includes:
S31, calculating the direction of the camera visual axis after deflection by the bipartite prism using a ray tracing method, and determining the two imaging visual angles corresponding to any bipartite prism rotation angle;
S32, deriving a dynamic virtual binocular system model equivalent to the imaging system according to the variation of the camera visual axis orientation with the bipartite prism rotation angle, and determining the position, posture and motion law of the virtual binocular system;
S33, calculating the basic matrix of the dynamic virtual binocular system and its variation law according to the internal and external parameters of the virtual binocular system at any bipartite prism rotation angle, with reference to the traditional binocular vision theory;
S34, deriving the epipolar constraint relation between the dual-view images collected by the system at any bipartite prism rotation angle according to the basic matrix of the dynamic virtual binocular system, thereby constructing the multi-epipolar constraint relations of the multi-view image sequence corresponding to the different bipartite prism rotation angles;
S35, using the epipolar constraint between the left virtual camera and the right virtual camera at a specific prism rotation angle position and a suitable window matching algorithm to search for the homonymous image points contained in the dual-view image, determining on this basis the epipolar constraints of these homonymous image points in the dual-view images corresponding to the other prism rotation angle positions, and filtering out the homonymous image points whose deviation from the epipolar line intersection is too large, according to the principle that a homonymous image point is theoretically located at the intersection of multiple epipolar lines.
Further, in step S31, the camera visual axis after deflection by the bipartite prism points in two directions d_L and d_R that are symmetric about the optical axis direction of the system, which can be obtained by the ray tracing method:
where the remaining symbols are intermediate variables, d_o = [0, 0, 1]^T is the optical axis direction of the single-camera imaging system, n_L is the normal vector of the left side face of the bipartite prism, n_R is the normal vector of the right side face of the bipartite prism, n_B is the normal vector of the back face of the bipartite prism, α is the included angle between the side faces and the back face of the bipartite prism, n is the refractive index of the material of the bipartite prism, and ω is the rotation angle of the bipartite prism; the normal vectors of the side faces and the back face of the bipartite prism are respectively as follows:
further, in step S32, the dynamic virtual binocular system is composed of two symmetrically distributed virtual cameras, and is used to simplify and describe the process of acquiring the dual-view image by the cameras under the action of the rotating bipartite prism device; the internal parameters of the two virtual cameras are completely the same as those of the actually used cameras, the external parameters of the two virtual cameras mainly depend on the structural parameters and the motion parameters of the rotating bipartite prism, and the external parameters are expressed as follows under the rotating angle omega of any bipartite prism:
where R_L(ω) is the rotation matrix of the left virtual camera relative to the actual camera at any bipartite prism rotation angle ω, t_L(ω) is the translation vector of the left virtual camera relative to the actual camera at any bipartite prism rotation angle ω, R_R(ω) is the rotation matrix of the right virtual camera relative to the actual camera at any bipartite prism rotation angle ω, t_R(ω) is the translation vector of the right virtual camera relative to the actual camera at any bipartite prism rotation angle ω, Rot(·) represents a rotation through a certain angle about the axis defined by the outer product of two vectors, the angle being determined by the vector cosine law, and g is the distance from the optical center of the actual camera to the back face of the bipartite prism.
Further, in step S33, a basic matrix exists between the left and right virtual cameras included in the dynamic virtual binocular system at any bipartite prism rotation angle ω:
F(ω) = (A_int)^(-T) (R_LR)^(-1) T_LR (A_int)^(-1)
R_LR = R_L(ω) R_R(ω)^(-1)
t_LR = t_L - R_L(ω) R_R(ω)^(-1) t_R
where A_int is the internal parameter matrix of the camera, R_LR is the relative rotation matrix of the left and right virtual cameras, and T_LR is the skew-symmetric matrix corresponding to the relative translation vector t_LR.
Further, in step S34, the basic matrix F(ω) of the left and right virtual cameras is multiplied by the homogeneous coordinates of the image points contained in one half of the dual-view image to obtain the positions of the epipolar lines corresponding to those image points in the other half of the image; similarly, according to the variation of the left and right virtual camera positions with the bipartite prism rotation angle, the basic matrix and the corresponding epipolar line positions between any two virtual camera positions under the m bipartite prism rotation angles can be obtained by the same method, thereby generating a series of redundant stereo matching constraint conditions.
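Extending the single-pair case, the sketch below enumerates a fundamental (basic) matrix for every pair of virtual camera poses accumulated over the m prism rotation angles, following the same expression as above; the pose values, the two-views-per-angle layout and the helper names are illustrative assumptions.

```python
import itertools
import numpy as np

def skew(t):
    return np.array([[0.0, -t[2], t[1]], [t[2], 0.0, -t[0]], [-t[1], t[0], 0.0]])

def fundamental(A, R_a, t_a, R_b, t_b):
    """Basic matrix between two virtual poses, same form as the single-angle expression."""
    R_ab = R_a @ np.linalg.inv(R_b)
    t_ab = t_a - R_ab @ t_b
    A_inv = np.linalg.inv(A)
    return A_inv.T @ np.linalg.inv(R_ab) @ skew(t_ab) @ A_inv

A_int = np.array([[1200.0, 0.0, 640.0], [0.0, 1200.0, 480.0], [0.0, 0.0, 1.0]])

# Placeholder virtual-camera poses: 2 virtual cameras per prism angle, m = 3 angles.
poses = {}
for k, omega in enumerate([0.0, 45.0, 90.0]):
    rad = np.deg2rad(omega)
    baseline = np.array([15.0 * np.cos(rad), 15.0 * np.sin(rad), 0.0])   # illustrative
    poses[(k, "L")] = (np.eye(3), -baseline)
    poses[(k, "R")] = (np.eye(3), +baseline)

# Basic matrix between every pair of virtual views -> redundant epipolar constraints.
F_all = {}
for (ka, kb) in itertools.combinations(sorted(poses), 2):
    (R_a, t_a), (R_b, t_b) = poses[ka], poses[kb]
    F_all[(ka, kb)] = fundamental(A_int, R_a, t_a, R_b, t_b)
print(len(F_all), "pairwise basic matrices")    # C(6, 2) = 15
```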
Further, in step S35, the window matching algorithm may be selected from existing algorithms such as the Sum of Absolute Differences (SAD) algorithm, the Sum of Squared Errors (SSD) algorithm or the Normalized Cross Correlation (NCC) algorithm.
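A minimal sketch of window matching with the SAD cost over candidate positions sampled along an epipolar line; the images, window size and candidate list are placeholders, and an SSD or NCC cost could be substituted in the same loop.

```python
import numpy as np

def sad_cost(patch_a, patch_b):
    """Sum of absolute differences between two equally sized windows."""
    return float(np.abs(patch_a.astype(np.float32) - patch_b.astype(np.float32)).sum())

def match_along_epipolar(img_left, img_right, u, v, candidates, half_win=5):
    """Find the candidate (u', v') on the epipolar line minimizing the SAD cost."""
    ref = img_left[v - half_win:v + half_win + 1, u - half_win:u + half_win + 1]
    best, best_cost = None, np.inf
    for (uc, vc) in candidates:                       # pixels sampled along the epipolar line
        cand = img_right[vc - half_win:vc + half_win + 1, uc - half_win:uc + half_win + 1]
        if cand.shape != ref.shape:
            continue                                  # skip windows falling outside the image
        cost = sad_cost(ref, cand)
        if cost < best_cost:
            best, best_cost = (uc, vc), cost
    return best, best_cost

# Illustrative use on synthetic images, searching along a horizontal epipolar line.
left = np.random.randint(0, 256, (480, 640), np.uint8)
right = np.roll(left, -12, axis=1)                    # synthetic 12-pixel disparity
cands = [(u, 240) for u in range(20, 620)]
print(match_along_epipolar(left, right, 300, 240, cands))
```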
Further, the step S4 specifically includes:
S41, calculating the initial three-dimensional point cloud distribution of the target by utilizing the triangulation principle according to the stereo matching result of the dual-view image acquired at the first prism rotation angle position;
S42, collecting the corresponding dual-view image at each of the other prism rotation angle positions, and updating the three-dimensional point cloud information of the target by using the triangulation principle after completing the stereo matching;
S43, comparing the initial three-dimensional point cloud with the updated three-dimensional point cloud, supplementing the data not contained in the initial estimate, continuously correcting and optimizing the three-dimensional point cloud corresponding to the homonymous image points by utilizing the gradually introduced redundant information, and filtering out as noise the data with large deviation before and after updating.
Further, in step S41, the corresponding three-dimensional point cloud is calculated according to the stereo matching result of the dual-view image, and the calculation process may be represented as:
where P_i is the three-dimensional coordinate of element i in the three-dimensional point cloud set, p_iL and p_iR are the homogeneous pixel coordinates of the homonymous image points contained in the left half and the right half of the dual-view image, respectively, i belongs to the set of positive integers, and λ_L and λ_R are the scale factors of the projection ray vectors corresponding to p_iL and p_iR, respectively, which can be eliminated by solving the simultaneous system of equations.
Further, in step S43, point cloud filtering is performed according to the deviation of the three-dimensional point cloud before and after updating, and each filtering calculation process is represented as:
where the left-hand side is the filtered three-dimensional point cloud set, P_i^estimate is an element of the initially estimated three-dimensional point cloud set, P_i^update is an element of the updated three-dimensional point cloud set, and ε is the deviation threshold between the updated point cloud and the initial estimate.
The embodiment also provides a specific implementation process of the single-camera image acquisition system and the three-dimensional reconstruction method based on the rotating bipartite prism, which are respectively described in detail below.
Single-camera image acquisition system based on rotary bipartite prism
As shown in fig. 1 to 5, the present embodiment provides a single-camera image capturing system based on a rotating bipartite prism, which includes a camera device and a rotating bipartite prism device. The camera device comprises a camera and a camera support, and the rotary bipartite prism device comprises a bipartite prism, a prism support structure, a rotary mechanism and an outer shell.
The camera device 1 specifically includes a camera 11 and a camera mount 12. The camera 11 is adjusted in position and attitude by the camera mount 12 so that the camera target surface is parallel to the back surface of the bipartite prism 21 and the visual axis is directed perpendicular to the top ridge of the bipartite prism 21. Parameters such as the focal length, field angle and depth of field of the camera 11 must be reasonably matched with parameters such as the included angle between the side faces and the back face of the bipartite prism 21 and its refractive index, so as to avoid the problem of field-of-view occlusion.
The rotating bipartite prism device 2 comprises a bipartite prism assembly, a rotating mechanism and an outer housing. The bipartite prism assembly comprises a bipartite prism 21 and a prism support structure 22, wherein the bipartite prism 21 is installed on a rectangular mounting surface in the central area of the prism support structure 22 by spring fixing or glue bonding, and the prism support structure 22 is provided with arc-shaped slots in the circumferential direction to reduce the moment of inertia.
The rotating mechanism adopts a torque motor direct drive mode, or alternatively a gear transmission mode, a synchronous belt transmission mode, a worm and gear transmission mode or the like; the torque motor direct drive mode is selected in this embodiment. The torque motor mainly comprises a rotor 23, a brush 24 and a stator 25; specifically, the bipartite prism assembly is fixedly connected to the torque motor rotor 23 by threaded connection, and the torque motor stator 25 is fixed on the end face of the outer shell 26 by threaded connection.
The outer housing 26 provides fixing and protection for the bipartite prism assembly and the torque motor, and the torque motor drives the bipartite prism assembly inside it to rotate.
The axial distance between the camera device 1 and the rotating bipartite prism device 2 can be dynamically adjusted according to the specific application and its requirements, providing a longitudinal degree of freedom for the multi-view image capture process and richer image information for the three-dimensional reconstruction process.
In this embodiment, the rotating bipartite prism device is introduced in front of the camera, and the camera visual axis pointing and imaging visual angle can be adjusted arbitrarily through the beam-splitting effect and full-circle rotary motion of the bipartite prism, so that a multi-view target image sequence containing rich information is acquired, which can effectively improve the precision and efficiency of multi-view stereo matching and three-dimensional reconstruction. Compared with existing single-camera three-dimensional reconstruction systems using cooperative markers or mirror groups, the three-dimensional reconstruction system of this embodiment does not need cooperative markers as prior information and does not introduce reflective elements sensitive to error disturbance, and can therefore achieve better structural compactness, imaging flexibility and environmental adaptability.
Single-camera multi-view three-dimensional reconstruction method based on rotating bipartite prism
As shown in fig. 6 to 8, the present embodiment provides a three-dimensional reconstruction method using the above single-camera image acquisition system based on a rotating bipartite prism, which specifically includes the following steps:
s1, system construction and parameter calibration
S11, constructing an imaging system consisting of the camera device 1 and the rotating bipartite prism device 2, and sequentially adjusting the postures of the camera 11 and the bipartite prism 21 to ensure the parallel relation between the target surface of the camera and the back surface of the bipartite prism, the perpendicular relation between the optical axis of the camera and the crest line at the top of the bipartite prism, and the axial distance relation between the camera and the bipartite prism;
S12, establishing a working coordinate system O-XYZ of the imaging system, fixing the origin O at the optical center of the camera, making the Z axis coincide with the optical axis direction of the camera, making the X axis and the Y axis both orthogonal to the Z axis, and making the X axis and the Y axis correspond to the row scanning direction and the column scanning direction of the camera image sensor, respectively;
S13, obtaining the internal parameters of the camera and the distortion coefficients of the lens by using a traditional vision calibration method; a method such as the Zhang Zhengyou calibration method, the direct linear transformation method or the two-step calibration method is adopted in this embodiment; determining the axial distance between the camera and the bipartite prism with measuring tools such as a vernier caliper or a laser interferometer, the vernier caliper being adopted in this embodiment; the calibration method and the measurement method are mature prior-art methods and are not elaborated further.
S2, Multi-view image sequence acquisition
The motion law of the rotating mechanism is controlled by software so that the bipartite prism rotates in sequence to m = 3 rotation angle positions, recorded as ω_1 = 0°, ω_2 = 45° and ω_3 = 90°; after the bipartite prism reaches each specified rotation angle position, the camera is triggered by software to acquire the corresponding dual-view image containing target information, and finally a multi-view target image sequence for three-dimensional reconstruction is obtained.
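The motor-control and camera-trigger software is not detailed here, so the following sketch only illustrates the structure of the acquisition loop for step S2, with rotate_prism_to() and capture_image() as hypothetical stand-ins for the actual control interfaces.

```python
# Hypothetical acquisition loop for step S2; rotate_prism_to() and capture_image()
# are placeholders for the actual motor-control and camera-trigger software.
import time

def rotate_prism_to(angle_deg):
    """Placeholder: command the torque motor so the bipartite prism reaches angle_deg."""
    print(f"rotating bipartite prism to {angle_deg} deg")
    time.sleep(0.1)                      # stand-in for motion and settling time

def capture_image(tag):
    """Placeholder: trigger the camera and return the dual-view image for this angle."""
    print(f"capturing dual-view image at {tag}")
    return None

prism_angles = [0.0, 45.0, 90.0]         # m = 3 rotation angle positions from the embodiment
image_sequence = []
for omega in prism_angles:
    rotate_prism_to(omega)               # move first, then trigger the camera
    image_sequence.append((omega, capture_image(f"omega={omega}")))
```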
S3, stereo matching and cross optimization
The epipolar constraint relation of the dual-view image corresponding to each bipartite prism rotation angle position is established in combination with the dynamic virtual binocular system model, the homonymous image points contained in the dual-view image are searched for through a window matching algorithm, and at the same time the multi-epipolar constraints provided by the image sequences corresponding to different bipartite prism rotation angles are used to cross check and optimize the stereo matching result of the multi-view image sequence.
S31, calculating the direction of the camera visual axis after deflection by the bipartite prism using the ray tracing method, and determining the two imaging visual angles corresponding to any bipartite prism rotation angle; the camera visual axis deflected by the bipartite prism points in two directions d_L and d_R that are symmetric about the optical axis direction of the system, derived from the vector refraction law:
where the remaining symbols are intermediate variables, d_o = [0, 0, 1]^T is the optical axis direction of the single-camera imaging system, n_L is the normal vector of the left side face of the bipartite prism, n_R is the normal vector of the right side face of the bipartite prism, n_B is the normal vector of the back face of the bipartite prism, α is the included angle between the side faces and the back face of the bipartite prism, n is the refractive index of the material of the bipartite prism, and ω is the rotation angle of the bipartite prism; the normal vectors of the side faces and the back face of the bipartite prism are respectively as follows:
In this embodiment, the included angle between the side faces and the back face of the bipartite prism is α = 5°, and the refractive index is n = 1.52.
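A sketch of the ray-tracing computation of the deflected visual axes d_L and d_R using the vector form of Snell's law with the embodiment's values α = 5° and n = 1.52; the face-normal convention used below (back-face normal along -Z toward the camera, side-face normals tilted by α and rotated with ω about the optical axis) is an assumption for illustration, as are the helper names.

```python
import numpy as np

def refract(d, normal, eta):
    """Vector form of Snell's law: refract unit direction d through a surface with
    unit normal `normal` (oriented against the incoming ray), with eta = n1 / n2."""
    cos_i = -float(np.dot(normal, d))
    sin2_t = eta**2 * (1.0 - cos_i**2)
    if sin2_t > 1.0:
        raise ValueError("total internal reflection")
    cos_t = np.sqrt(1.0 - sin2_t)
    return eta * d + (eta * cos_i - cos_t) * normal

def rot_z(omega_deg):
    """Rotation of the prism (and its face normals) about the optical axis."""
    w = np.deg2rad(omega_deg)
    return np.array([[np.cos(w), -np.sin(w), 0.0],
                     [np.sin(w), np.cos(w), 0.0],
                     [0.0, 0.0, 1.0]])

alpha, n_glass = np.deg2rad(5.0), 1.52        # embodiment values: wedge angle and refractive index
d_o = np.array([0.0, 0.0, 1.0])               # optical axis of the single-camera imaging system

def deflected_axes(omega_deg):
    """Deflected visual axes d_L, d_R for prism rotation angle omega (assumed face geometry)."""
    R = rot_z(omega_deg)
    n_B = np.array([0.0, 0.0, -1.0])                                   # back face, toward the camera
    n_L = R @ np.array([-np.sin(alpha), 0.0, np.cos(alpha)])           # left exit face (assumed)
    n_R = R @ np.array([np.sin(alpha), 0.0, np.cos(alpha)])            # right exit face (assumed)
    d_in = refract(d_o, n_B, 1.0 / n_glass)    # air -> glass at the back face (normal incidence)
    d_L = refract(d_in, -n_L, n_glass / 1.0)   # glass -> air at the left side face
    d_R = refract(d_in, -n_R, n_glass / 1.0)   # glass -> air at the right side face
    return d_L / np.linalg.norm(d_L), d_R / np.linalg.norm(d_R)

for omega in (0.0, 45.0, 90.0):
    print(omega, *deflected_axes(omega))
```

With these values the two axes deviate from the optical axis by roughly 2.6° in opposite directions, which is consistent with a thin-wedge deflection of about (n - 1)·α.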
S32, deducing a dynamic virtual binocular system model equivalent to the imaging system according to the change relation of the camera visual axis orientation along with the rotation angle of the bipartite prism; the dynamic virtual binocular system consists of two virtual cameras which are symmetrically distributed and is used for simplifying and describing the process of acquiring the double-view-angle images by the cameras under the action of the rotating bipartite prism device; the internal parameters of the two virtual cameras are completely the same as those of the actually used cameras, the external parameters of the two virtual cameras mainly depend on the structural parameters and the motion parameters of the rotating bipartite prism, and the external parameters are expressed as follows under the rotating angle omega of any bipartite prism:
where R_L(ω) is the rotation matrix of the left virtual camera relative to the actual camera at any bipartite prism rotation angle ω, t_L(ω) is the translation vector of the left virtual camera relative to the actual camera at any bipartite prism rotation angle ω, R_R(ω) is the rotation matrix of the right virtual camera relative to the actual camera at any bipartite prism rotation angle ω, t_R(ω) is the translation vector of the right virtual camera relative to the actual camera at any bipartite prism rotation angle ω, Rot(·) represents a rotation through a certain angle about the axis defined by the outer product of two vectors, the angle being determined by the vector cosine law, and g is the distance from the optical center of the actual camera to the back face of the bipartite prism.
S33, referring to the traditional binocular vision theory, according to the internal and external parameters of the virtual binocular system at any bipartite prism rotation angle ω, the left virtual camera and the right virtual camera contained in the dynamic virtual binocular system satisfy the following basic matrix:
F(ω) = (A_int)^(-T) (R_LR)^(-1) T_LR (A_int)^(-1)
R_LR = R_L(ω) R_R(ω)^(-1)
t_LR = t_L - R_L(ω) R_R(ω)^(-1) t_R
where A_int is the internal parameter matrix of the camera, R_LR is the relative rotation matrix of the left and right virtual cameras, and T_LR is the skew-symmetric matrix corresponding to the relative translation vector t_LR.
S34, at the bipartite prism rotation angle ω_1 = 0°, the basic matrix F(ω) of the left and right virtual cameras in the dynamic virtual binocular system is multiplied by the homogeneous coordinates of the image points contained in the left half and the right half of the collected image, so that the positions of the corresponding epipolar lines in the right half and the left half of the image can be obtained; then, according to the relation between the left and right virtual camera positions and the bipartite prism rotation angle, the basic matrix between any two virtual camera positions under the different bipartite prism rotation angles and the corresponding epipolar line positions can be obtained, thereby generating a series of redundant stereo matching constraint conditions.
S35, using the epipolar constraint between the left virtual camera and the right virtual camera at the prism rotation angle ω_1 = 0° and combining a window matching algorithm such as the Sum of Absolute Differences (SAD) algorithm, the Sum of Squared Errors (SSD) algorithm or the Normalized Cross Correlation (NCC) algorithm (the SAD algorithm is adopted in this embodiment), the homonymous image points p_iL and p_iR contained in the dual-view image are found; on this basis, the epipolar constraints of the homonymous image points p_iL and p_iR within the images acquired at ω_2 = 45° and ω_3 = 90° are determined, and for each group of homonymous image points 5 corresponding epipolar line positions can be determined; according to the principle that the homonymous image point of each visual angle is theoretically located at the intersection of its 5 corresponding epipolar lines, the homonymous image points whose deviation from the epipolar line intersection exceeds the threshold δ = 0.5 pixel are filtered out.
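The cross-checking criterion can be illustrated as follows: a candidate homonymous image point is kept only if it lies within δ = 0.5 pixel of the common intersection of its corresponding epipolar lines. In the sketch below the synthetic lines and points are placeholders, and treating the deviation as the distance to the least-squares intersection of the lines is an assumption of this sketch.

```python
import numpy as np

def lines_intersection(lines):
    """Least-squares intersection (u, v) of several epipolar lines a*u + b*v + c = 0,
    each line being l = F @ p from one of the other views, normalized so (a, b) is unit."""
    A = np.array([[a, b] for a, b, _ in lines])
    rhs = np.array([-c for _, _, c in lines])
    uv, *_ = np.linalg.lstsq(A, rhs, rcond=None)
    return uv

def passes_cross_check(candidate_uv, lines, delta=0.5):
    """Keep a candidate homonymous image point only if it lies within delta pixels
    of the common intersection of its corresponding epipolar lines."""
    inter = lines_intersection(lines)
    return float(np.linalg.norm(np.asarray(candidate_uv) - inter)) <= delta

# Illustrative check: three synthetic epipolar lines meeting at (400, 300).
lines = [np.array([1.0, 0.0, -400.0]),                  # u = 400
         np.array([0.0, 1.0, -300.0]),                  # v = 300
         np.array([1.0, 1.0, -700.0]) / np.sqrt(2.0)]   # u + v = 700, normalized
print(passes_cross_check((400.3, 299.8), lines))        # True: within 0.5 pixel
print(passes_cross_check((402.0, 303.0), lines))        # False: rejected as a mismatch
```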
S4, three-dimensional reconstruction and point cloud filtering
S41, using the homonymous image points p_iL1 and p_iR1 contained in the image acquired at the prism rotation angle position ω_1 = 0°, and combining the triangulation principle, the position coordinates of the corresponding target points are calculated and recovered to obtain the initial estimate of the three-dimensional point cloud; the calculation process for each target point P_i can be expressed as:
where P_i is the three-dimensional coordinate of element i in the three-dimensional point cloud set, p_iL1 and p_iR1 are the homogeneous pixel coordinates of the homonymous image points contained in the left half and the right half of the dual-view image, respectively, i belongs to the set of positive integers, and λ_L and λ_R are the scale factors of the projection ray vectors corresponding to p_iL1 and p_iR1, respectively, which can be eliminated by solving the simultaneous system of equations.
S42, using in sequence the stereo matching results of the images acquired at the prism rotation angle positions ω_2 = 45° and ω_3 = 90°, the three-dimensional point cloud is recalculated; the calculation method is the same as that of step S41;
S43, after each calculation is completed, the previous three-dimensional point cloud is compared with the updated three-dimensional point cloud, and the data not originally contained are supplemented; the distribution of the three-dimensional target point cloud is gradually corrected and optimized by utilizing the correspondence of the homonymous image points, while the data with large deviation before and after updating are filtered out as noise, the point cloud filtering process being expressed as:
where the left-hand side is the filtered three-dimensional point cloud set, P_i^estimate is an element of the initially estimated three-dimensional point cloud set, P_i^update is an element of the updated three-dimensional point cloud set, and ε is the deviation threshold between the updated point cloud and the initial estimate; in this embodiment, the deviation threshold is ε = 1 mm.
The foregoing detailed description of the preferred embodiments of the invention has been presented. It should be understood that numerous modifications and variations could be devised by those skilled in the art in light of the present teachings without departing from the inventive concepts. Therefore, the technical solutions available to those skilled in the art through logic analysis, reasoning and limited experiments based on the prior art according to the concept of the present invention should be within the scope of protection defined by the claims.