CN112330794A - Single-camera image acquisition system based on rotary bipartite prism and three-dimensional reconstruction method - Google Patents

Single-camera image acquisition system based on rotary bipartite prism and three-dimensional reconstruction method

Info

Publication number
CN112330794A
Authority
CN
China
Prior art keywords
prism
camera
bipartite
image
point cloud
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202011072913.6A
Other languages
Chinese (zh)
Other versions
CN112330794B (en)
Inventor
李安虎
刘兴盛
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tongji University
Original Assignee
Tongji University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tongji University
Priority to CN202011072913.6A
Publication of CN112330794A
Application granted
Publication of CN112330794B
Legal status: Active
Anticipated expiration


Abstract


The invention relates to a single-camera image acquisition system and a three-dimensional reconstruction method based on a rotating bipartite prism. The system includes a camera device and a rotating bipartite prism device: the camera device includes a camera and a camera bracket supporting the camera; the rotating bipartite prism device includes a bipartite prism, a prism supporting structure, a rotating mechanism, and an outer casing supporting the rotating bipartite prism device. The three-dimensional reconstruction method includes the following steps: system construction and parameter calibration, multi-view image sequence acquisition, stereo matching and cross-optimization, and three-dimensional reconstruction and point cloud filtering. Compared with the prior art, the invention changes the imaging view angle of a single camera through the rotational motion of the bipartite prism, so that the camera emulates the function of a dynamic binocular vision system capturing multi-view target information, which can effectively improve the accuracy, efficiency, implementation flexibility, and dynamic adaptability of single-camera multi-view stereo matching and three-dimensional reconstruction.


Description

Single-camera image acquisition system based on rotary bipartite prism and three-dimensional reconstruction method
Technical Field
The invention relates to the field of multi-view three-dimensional reconstruction, in particular to a single-camera image acquisition system and a three-dimensional reconstruction method based on a rotating bipartite prism.
Background
Multi-view three-dimensional reconstruction is a technology that obtains the three-dimensional shape information of a spatial target from multiple image sequences captured under different views, recovering that information through the principles of stereo vision geometry; it has important application value in fields such as autonomous navigation, geographical mapping, and space remote sensing. Traditional dual-camera or multi-camera vision systems achieve multi-view image acquisition by increasing the number of sensors, but at the cost of increased system complexity, larger physical size, a reduced common field of view, and so on. In comparison, a single-camera vision system has a simple structure and a high level of integration, and by introducing additional optical elements or camera motion constraints to acquire a multi-view target image sequence, it can provide a more economical, flexible, and effective solution for three-dimensional target reconstruction or scene restoration.
The following prior studies propose several typical single-camera multi-view three-dimensional reconstruction systems and methods:
Prior art 1: "Single-camera multi-angle space point coordinate measuring method" (Zhao et al., publication No. CN 109141226A, published January 4, 2019) discloses a method of pasting a plurality of marker points with known coordinates on the target surface and acquiring multi-view target images by changing the shooting angle of a single camera. Prior art 2: "A measuring system and method of arc guide rail type single camera" (publication No. CN 110645962A) discloses a method of shooting a target image sequence containing coded points from multiple directions with a single camera moving along an arc guide rail, and then calculating the three-dimensional information of the target measuring points by the photogrammetric principle. The above methods require that the target surface carry cooperative markers satisfying certain constraint conditions and that the camera position and shooting angle be changed many times, so the flexibility of the specific implementation and the universality of practical applications are limited to a certain extent.
Prior art 3: "Single-camera binocular vision apparatus" (Zhang Qingchuan et al., publication No. CN 109856895A, published June 7, 2019) discloses a method of capturing region-of-interest image information at bilaterally symmetric, adjustable viewing angles, using a single camera combined with two sets of symmetrically distributed mirrors. Prior art 4: "A novel single-camera three-dimensional digital image correlation system using a light-combining prism" (Pan et al., publication No. CN 110530286A, published December 3, 2019) discloses a method that combines a single camera, an X-cube light-combining prism, and a group of symmetrically distributed reflectors to fuse and record target image information of different color channels onto the camera target surface, and then achieves high-precision three-dimensional measurement with a digital image correlation algorithm. These methods change the imaging view angle by relying on the beam deflection produced by at least one group of mirrors; they require sufficient arrangement space and adjustment angles to guarantee the field of view of the three-dimensional reconstruction, sacrificing the compactness and integration of the system as well as its suppression of and adaptability to error disturbances.
In summary, the prior art has the following disadvantages:
1. For methods that acquire multi-view target images by changing the camera position and shooting angle, the flexibility of the specific implementation and the universality of practical applications are limited to a certain degree;
2. Methods that acquire multi-view target images with multiple groups of symmetrically distributed reflectors require sufficient arrangement space and adjustment angles to guarantee the field of view of the three-dimensional reconstruction, sacrificing the compactness and integration of the system as well as its suppression of and adaptability to error disturbances.
Disclosure of Invention
The present invention is directed to overcoming the above-mentioned drawbacks of the prior art, and provides a single-camera image acquisition system based on a rotating bipartite prism and a three-dimensional reconstruction method.
The purpose of the invention can be realized by the following technical scheme:
a single-camera image acquisition system based on a rotary bipartite prism comprises a camera device and a rotary bipartite prism device, wherein the camera device comprises a camera and a camera support for supporting the camera;
the rotating bipartite prism device comprises a bipartite prism, a prism supporting structure, a rotating mechanism and an outer shell for supporting the rotating bipartite prism device, wherein the bipartite prism is fixedly arranged in the central area of the prism supporting structure, and the output end of the rotating mechanism is connected with the prism supporting structure and is used for driving the prism supporting structure to rotate on a vertical plane;
the detection end of the camera is aligned with the bipartite prism.
Further, the target surface of the camera and the back surface of the bipartite prism satisfy a parallel relationship, and the optical axis of the camera intersects and is perpendicular to the top ridge line opposite to the back surface of the bipartite prism.
Further, the rotating mechanism is a torque motor comprising a torque motor rotor, torque motor brushes, and a torque motor stator; the prism supporting structure is connected to the torque motor rotor, and the torque motor stator is mounted on the outer shell.
The invention also provides a three-dimensional reconstruction method of the single-camera image acquisition system based on the rotating bipartite prism, which comprises the following steps:
system construction and parameter calibration: adjusting the position and the posture of the camera device and the rotary bipartite prism device to construct a single-camera imaging system and a working coordinate system thereof; acquiring internal parameters of the camera and an axial distance between the camera and the bipartite prism by using a visual calibration method;
acquiring a multi-view image sequence: the rotating mechanism is controlled to drive the bipartite prism to rotate, and the camera is used for collecting double-view images containing target information at the corner position of each bipartite prism to form a multi-view target image sequence for three-dimensional reconstruction;
stereo matching and cross optimization: deducing a dynamic virtual binocular system model equivalent to the single-camera imaging system according to the direction of the camera visual axis after deflection by the bipartite prism and the rotation angle of the bipartite prism, establishing an epipolar constraint relation of the dual-view image corresponding to each bipartite prism rotation angle position, searching for homonymous image points in the dual-view image through a window matching algorithm, and performing cross checking and optimization on the homonymous image points in the dual-view images at different bipartite prism rotation angle positions to realize stereo matching;
three-dimensional reconstruction and point cloud filtering: acquiring initial estimation of three-dimensional point cloud of a target according to the homonymous image point of the double-view-angle image corresponding to the corner position of the bipartite prism; and supplementing point cloud information missing from the initially estimated three-dimensional point cloud according to the homonymous image points of the double-view-angle images corresponding to the corner positions of the other bipartite prisms, so as to update the three-dimensional point cloud, and then carrying out noise filtering to obtain a final three-dimensional point cloud reconstruction result.
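For orientation, the four steps can be outlined as a minimal Python skeleton; the four step functions passed in (calibrate_system, acquire_dual_view, match_and_cross_check, triangulate_and_filter) are hypothetical placeholders for the procedures described above, not part of the patent.

```python
# Skeleton of the four-step pipeline. The four callables are hypothetical
# placeholders for the procedures described above; only the data flow is shown.

def reconstruct(prism_angles, calibrate_system, acquire_dual_view,
                match_and_cross_check, triangulate_and_filter):
    # 1. System construction and parameter calibration
    A_int, g = calibrate_system()                       # camera intrinsics, camera-prism axial distance

    # 2. Multi-view image sequence acquisition
    images = {w: acquire_dual_view(w) for w in prism_angles}

    # 3. Stereo matching and cross optimization
    matches = match_and_cross_check(images, A_int, g)

    # 4. Three-dimensional reconstruction and point cloud filtering
    return triangulate_and_filter(matches, A_int, g)
```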
Further, in the step of system construction and parameter calibration, the step of constructing the single-camera imaging system is specifically to adjust the positions and postures of the camera device and the rotary bipartite prism device so as to ensure the parallel relationship between the camera target surface and the back surface of the bipartite prism, the perpendicular relationship between the camera optical axis and the crest line at the top of the bipartite prism, and the axial distance relationship between the camera and the bipartite prism;
specifically, the working coordinate system of the single-camera imaging system is established, an origin O is fixed at the optical center position of the camera, a Z axis coincides with the optical axis direction of the camera, an X axis and a Y axis are both orthogonal to the Z axis, the X axis corresponds to the line scanning direction of the camera image sensor, and the Y axis corresponds to the column scanning direction of the camera image sensor.
Further, in the stereo matching and cross optimization steps, the derivation process of the dynamic virtual binocular system model is specifically,
calculating, by a ray tracing method, the two directions d_L and d_R, symmetric about the optical axis direction of the single-camera imaging system, to which the camera visual axis points after being deflected by the bipartite prism, thereby determining the two imaging view angles corresponding to an arbitrary bipartite prism rotation angle; and deriving the dynamic virtual binocular system model according to the variation of the camera visual axis orientation with the bipartite prism rotation angle;
the deflected camera visual axis directions d_L and d_R are obtained by applying the vector refraction law at the faces of the bipartite prism, starting from the optical axis direction d_o = [0, 0, 1]^T;
where d_o is the optical axis direction of the single-camera imaging system, n_L is the normal vector of the left side face of the bipartite prism, n_R is the normal vector of the right side face of the bipartite prism, n_B is the normal vector of the back face of the bipartite prism, α is the included angle between the side faces and the back face of the bipartite prism, n is the refractive index of the bipartite prism material, and ω is the rotation angle of the bipartite prism;
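As an illustration of this ray-tracing step, the following Python sketch applies the vector form of Snell's law through the prism; the air-glass-air refraction sequence, the side-face normal conventions, and all function names are assumptions of this sketch rather than the patent's exact expressions.

```python
import numpy as np

def refract(d, n_vec, eta):
    """Vector form of Snell's law: refract unit direction d at a surface with
    unit normal n_vec oriented against the incoming ray, with eta = n1 / n2."""
    cos_i = -np.dot(n_vec, d)
    k = 1.0 - eta**2 * (1.0 - cos_i**2)
    if k < 0.0:
        raise ValueError("total internal reflection")
    return eta * d + (eta * cos_i - np.sqrt(k)) * n_vec

def deflected_axes(alpha_deg, n_glass, omega_deg):
    """Approximate d_L and d_R for a bipartite prism with wedge angle alpha,
    refractive index n_glass, rotated by omega about the optical axis (Z)."""
    alpha, omega = np.radians(alpha_deg), np.radians(omega_deg)
    d_o = np.array([0.0, 0.0, 1.0])                      # camera optical axis
    n_B = np.array([0.0, 0.0, -1.0])                     # back-face normal (towards the camera)
    # Assumed side-face normals of the two wedge halves, rotated by omega:
    n_L = np.array([-np.sin(alpha) * np.cos(omega), -np.sin(alpha) * np.sin(omega), np.cos(alpha)])
    n_R = np.array([ np.sin(alpha) * np.cos(omega),  np.sin(alpha) * np.sin(omega), np.cos(alpha)])
    d_in = refract(d_o, n_B, 1.0 / n_glass)              # air -> glass at the back face
    d_L = refract(d_in, -n_L, n_glass)                   # glass -> air at the left side face
    d_R = refract(d_in, -n_R, n_glass)                   # glass -> air at the right side face
    return d_L / np.linalg.norm(d_L), d_R / np.linalg.norm(d_R)

# Example with the embodiment's parameters (alpha = 5 deg, n = 1.52):
dL, dR = deflected_axes(5.0, 1.52, 0.0)
```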
the dynamic virtual binocular system model comprises a left virtual camera alpha machine and a right virtual camera, and the calculation expressions of the rotation matrix and the translation vector of the left virtual camera and the right virtual camera relative to the actual camera under any bipartite prism rotation angle omega are as follows:
Figure BDA0002715734980000045
Figure BDA0002715734980000046
in the formula, RL(omega) is a rotation matrix of the left virtual camera relative to the actual camera under any bipartite prism rotation angle omega, tL(omega) is the translation vector of the left virtual camera relative to the actual camera under any bipartite prism rotation angle omega, RR(omega) is a rotation matrix of the right virtual camera relative to the actual camera under any bipartite prism rotation angle omega, tRAnd (omega) is a translation vector of the right virtual camera relative to the actual camera under an arbitrary rotation angle omega of the bipartite prism, Rot represents a certain angle of rotation around an axis direction defined by an outer product of two vectors, the angle is determined by a vector cosine law, and g is a distance from an optical center of the actual camera to the back of the bipartite prism.
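The Rot construction described above (rotation about the axis given by the outer product d_o × d_L, through the angle given by the vector cosine law) can be sketched as follows; the example deflected direction and the omission of the translation vectors are simplifications of this sketch.

```python
import numpy as np

def rot_axis_angle(axis, angle):
    """Rodrigues' formula: rotation matrix about 'axis' by 'angle' (radians)."""
    n = np.linalg.norm(axis)
    if n < 1e-12:
        return np.eye(3)                       # degenerate case: axes already aligned
    a = axis / n
    K = np.array([[0, -a[2], a[1]],
                  [a[2], 0, -a[0]],
                  [-a[1], a[0], 0]])
    return np.eye(3) + np.sin(angle) * K + (1 - np.cos(angle)) * (K @ K)

def virtual_camera_rotation(d_o, d_virtual):
    """R(omega): rotation aligning the actual optical axis d_o with the deflected
    visual axis, built as Rot(d_o x d_virtual, angle from the vector cosine law)."""
    axis = np.cross(d_o, d_virtual)                                  # outer product -> rotation axis
    angle = np.arccos(np.clip(np.dot(d_o, d_virtual), -1.0, 1.0))    # cosine law -> rotation angle
    return rot_axis_angle(axis, angle)

d_o = np.array([0.0, 0.0, 1.0])
d_L_example = np.array([np.sin(np.radians(2.6)), 0.0, np.cos(np.radians(2.6))])  # ~2.6 deg deflection
R_L = virtual_camera_rotation(d_o, d_L_example)
# The translation vectors t_L(omega), t_R(omega) additionally involve g, the distance
# from the actual camera's optical center to the back face of the bipartite prism.
```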
Further, in the system construction and parameter calibration step and the stereo matching and cross optimization step, the dynamic virtual binocular system model includes a basic matrix between the left virtual camera and the right virtual camera at an arbitrary bipartite prism rotation angle ω, with the calculation expression:
F(ω) = (A_int)^(-T) (R_LR)^(-1) T_LR (A_int)^(-1)
R_LR = R_L(ω) R_R(ω)^(-1)
t_LR = t_L(ω) - R_L(ω) R_R(ω)^(-1) t_R(ω)
where A_int is the internal parameter matrix of the camera, R_LR is the relative rotation matrix of the left and right virtual cameras, and T_LR is the skew-symmetric matrix corresponding to the relative translation vector t_LR;
multiplying the basic matrix F(ω) of the left and right virtual cameras by the homogeneous coordinates of an image point contained in one half of the dual-view image yields the position of that image point's corresponding epipolar line in the other half of the image, thereby obtaining the epipolar constraint relation.
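A minimal sketch of how the basic matrix yields an epipolar line, directly following the expressions above; the skew-symmetric helper and variable names are illustrative only.

```python
import numpy as np

def skew(t):
    """Skew-symmetric matrix T such that T @ v == np.cross(t, v)."""
    return np.array([[0, -t[2], t[1]],
                     [t[2], 0, -t[0]],
                     [-t[1], t[0], 0]])

def basic_matrix(A_int, R_L, t_L, R_R, t_R):
    """F(omega) between the left and right virtual cameras, following the
    expressions given above."""
    R_LR = R_L @ np.linalg.inv(R_R)
    t_LR = t_L - R_LR @ t_R
    return np.linalg.inv(A_int).T @ np.linalg.inv(R_LR) @ skew(t_LR) @ np.linalg.inv(A_int)

def epipolar_line(F, p_homogeneous):
    """Line coefficients (a, b, c) of the epipolar line in the other half-image:
    a point q on the line satisfies a*q_x + b*q_y + c = 0."""
    return F @ p_homogeneous
```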
Further, in the stereo matching and cross optimization step, the cross checking and optimization specifically comprises filtering out homonymous image points whose deviation from the epipolar-line intersection point is too large, according to the principle that a homonymous image point is theoretically located at the intersection point of the multiple epipolar lines.
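The cross check can be sketched as a distance test against the intersection of the epipolar lines from different prism rotation angles; intersecting the first two lines by the homogeneous cross product and the pixel threshold are assumptions of this sketch.

```python
import numpy as np

def line_intersection(l1, l2):
    """Intersection of two homogeneous image lines (cross product), as a pixel point.
    Assumes the lines are not parallel (third homogeneous coordinate is nonzero)."""
    p = np.cross(l1, l2)
    return p[:2] / p[2]

def passes_cross_check(candidate_xy, epipolar_lines, threshold_px):
    """Keep a candidate homonymous image point only if it lies close to the
    common intersection of the epipolar lines from the different prism angles."""
    inter = line_intersection(epipolar_lines[0], epipolar_lines[1])
    return np.linalg.norm(np.asarray(candidate_xy) - inter) <= threshold_px
```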
Further, in the three-dimensional reconstruction and point cloud filtering step, the initial estimate of the three-dimensional point cloud is obtained by triangulating each pair of homonymous image points from the projection rays of the left and right virtual cameras, where P_i is the three-dimensional coordinate of element i in the three-dimensional point cloud set, p_i^L is the homogeneous pixel coordinate of the homonymous image point contained in the left half of the dual-view image, p_i^R is the homogeneous pixel coordinate of the homonymous image point contained in the right half of the dual-view image, i belongs to the set of positive integers, λ_L is the scale factor of the projection ray vector corresponding to p_i^L, and λ_R is the scale factor of the projection ray vector corresponding to p_i^R.
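The triangulation itself can be sketched as the least-squares intersection of the two virtual-camera projection rays; the extrinsic convention (world-to-camera rotation and translation) and the midpoint solution are assumptions of this sketch, not the patent's exact equations.

```python
import numpy as np

def triangulate(A_int, R_L, t_L, R_R, t_R, p_L, p_R):
    """Recover a 3D point from a pair of homonymous image points (homogeneous
    pixel coordinates p_L, p_R) seen by the two virtual cameras.

    Each virtual camera contributes a projection ray; the two scale factors
    lambda_L, lambda_R are found by least squares and the point is taken as the
    midpoint of the closest approach of the two rays."""
    # Ray directions expressed in the working coordinate frame.
    r_L = R_L.T @ (np.linalg.inv(A_int) @ p_L)
    r_R = R_R.T @ (np.linalg.inv(A_int) @ p_R)
    # Virtual camera centers expressed in the working coordinate frame.
    o_L = -R_L.T @ t_L
    o_R = -R_R.T @ t_R
    # Solve o_L + lambda_L * r_L = o_R + lambda_R * r_R in the least-squares sense.
    A = np.stack([r_L, -r_R], axis=1)              # 3x2 system matrix
    lambdas, *_ = np.linalg.lstsq(A, o_R - o_L, rcond=None)
    lam_L, lam_R = lambdas
    return 0.5 * ((o_L + lam_L * r_L) + (o_R + lam_R * r_R))
```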
Further, in the three-dimensional reconstruction and point cloud filtering step, the noise filtering specifically comprises performing point cloud filtering according to the deviation of the three-dimensional point cloud before and after updating; each filtering pass retains an updated point only if its deviation from the initial estimate is within the threshold:
P_filter = { P_i^update : || P_i^update - P_i^estimate || ≤ ε }
where P_filter is the filtered three-dimensional point cloud set, P_i^estimate is an element of the initially estimated three-dimensional point cloud set, P_i^update is an element of the three-dimensional point cloud set obtained by the update calculation, and ε is the deviation threshold between the updated point cloud and the initial estimate.
Compared with the prior art, the invention has the following advantages:
(1) According to the invention, a rotating bipartite prism device is introduced in front of the single camera; by using the beam-splitting effect of the bipartite prism, the single camera can synchronously acquire image information from two symmetric view angles while the compactness of the whole structure is preserved. Driving the bipartite prism to rotate with the rotating mechanism effectively extends the visual axis orientations and field of view of the imaging system, alleviates to some extent the information loss caused by factors such as motion and occlusion, and, by emulating a dynamic binocular vision system that captures multi-view target information, effectively improves the accuracy, efficiency, implementation flexibility, and dynamic adaptability of single-camera multi-view stereo matching and three-dimensional reconstruction.
(2) The method combines the traditional stereoscopic vision calculation theory and the dynamic virtual binocular system model, realizes simplified description of the single-camera multi-view imaging process and efficient processing of redundant image information, and can effectively improve the precision, flexibility and adaptability of single-camera three-dimensional reconstruction.
(3) The invention utilizes the multi-polar line constraint and cross inspection method of the multi-view image sequence, not only can screen out the homonymous image points which are wrongly matched, but also can supplement the homonymous image points which are not contained in a specific view angle, can improve the accuracy and the rapidity of the stereo matching of the multi-view image with lower operation cost, and particularly can provide an effective solution for the problem of the stereo matching of a weak texture area.
(4) The invention does not require the camera to perform any motion, does not depend on any form of cooperative marker, and does not introduce optical elements with complex structures; multi-view image capture and three-dimensional reconstruction are realized solely by the rotational motion of the refractive bipartite prism, which ensures the structural compactness and disturbance resistance of the imaging system and can provide a potential technical approach for application fields such as pattern recognition and product inspection.
Drawings
FIG. 1 is an isometric view of the appearance of a single camera image acquisition system;
FIG. 2 is an assembly view of the structure of the rotary bipartite prism device, in which (a) is a front view of the rotary bipartite prism device and (b) is a sectional view taken along line A-A in (a);
FIG. 3 is a schematic structural diagram of the bipartite prism, in which (a) to (d) are, respectively, a front view, a left view, a top view, and an axonometric view;
FIG. 4 is a schematic view of a prism support structure, wherein: (a) is a front view, and (B) is a sectional view B-B in (a);
fig. 5 is a schematic structural diagram of the outer shell, wherein: (a) is a front view, and (b) is a cross-sectional view of C-C in (a);
FIG. 6 is a basic flow diagram of a three-dimensional reconstruction method;
FIG. 7 is a schematic diagram of a dynamic virtual binocular model;
FIG. 8 is a schematic diagram of a multi-view image sequence stereo matching process;
In the figures: 1, camera device; 11, camera; 12, camera support; 2, rotary bipartite prism device; 21, bipartite prism; 22, prism support structure; 23, torque motor rotor; 24, torque motor brush; 25, torque motor stator; 26, outer shell.
Detailed Description
The invention is described in detail below with reference to the figures and specific embodiments. The present embodiment is implemented on the premise of the technical solution of the present invention, and a detailed implementation manner and a specific operation process are given, but the scope of the present invention is not limited to the following embodiments.
Example 1
The embodiment provides a single-camera image acquisition system based on a rotary bipartite prism, which comprises a camera device and a rotary bipartite prism device. The rotary bipartite prism device is used for changing the propagation direction of the imaging light rays in the camera field of view so as to generate two symmetric imaging view angles, and the camera device is used for synchronously acquiring and recording the target image information under the two imaging view angles. The camera device comprises a camera and a camera support, the camera support being used for adjusting the posture and angle of the camera. The rotary bipartite prism device comprises a bipartite prism assembly, a rotating mechanism, and an outer shell; the rotating mechanism is used for driving the bipartite prism assembly to rotate, and the outer shell is used for supporting and protecting the rotating mechanism and the bipartite prism assembly. The axial distance between the camera device and the rotary bipartite prism device is allowed to be adjusted within a certain range, providing more degrees of freedom for multi-view image capture and three-dimensional reconstruction.
Furthermore, the bipartite prism assembly comprises a bipartite prism and a prism support structure, the bipartite prism is arranged in the central area of the prism support structure in a glue bonding mode or a spring plate fixing mode, and the prism support structure is used for fixing and supporting the bipartite prism.
Furthermore, the rotating mechanism adopts a torque motor direct-drive mode, or alternatively a gear transmission, synchronous belt transmission, or worm-and-gear transmission mode, the torque motor comprising a rotor and a stator; the bipartite prism assembly is connected to the torque motor rotor by threaded connection through the prism supporting structure, and the torque motor stator is mounted on the outer shell by threaded connection; the torque motor drives the bipartite prism assembly to rotate within the outer shell.
Furthermore, the target surface of the camera and the back surface of the bipartite prism meet the parallel relation, the optical axis of the camera is intersected and perpendicular with the top ridge line opposite to the back surface of the bipartite prism, and meanwhile, the view field of the camera is guaranteed not to be shielded by the rotating bipartite prism device.
The embodiment further provides a three-dimensional reconstruction method adopting the single-camera image acquisition system based on the rotating bipartite prism, which comprises the following steps:
s1, system construction and parameter calibration: constructing a single-camera imaging system and a working coordinate system thereof according to the relative position relationship between the camera and the bipartite prism, and acquiring internal parameters of the camera and the distance between the camera and the bipartite prism in the optical axis direction by using a visual calibration method;
s2, multi-view image sequence acquisition: the rotation angle change of the bipartite prism is realized by controlling the rotating mechanism, and a camera is utilized to collect double-view images containing target information at the rotation angle position of each bipartite prism to generate a multi-view target image sequence for three-dimensional reconstruction;
s3, stereo matching and cross optimization: establishing a polar constraint relation of the dual-view images corresponding to the corner positions of each bipartite prism by combining a dynamic virtual binocular system model, searching homonymous image points contained in the dual-view images through a window matching algorithm, and simultaneously performing cross check and optimizing a stereo matching result of the multi-view image sequence by using multi-polar constraint provided by image sequences corresponding to different bipartite prism corners;
s4, three-dimensional reconstruction and point cloud filtering: utilizing the homonymous image points contained in the collected image at the corner position of the specific bipartite prism, and calculating and recovering the position coordinates of the corresponding target point by combining the triangulation principle to obtain the initial estimation of the three-dimensional point cloud; and then, redundant stereo matching provided by images acquired at the corner positions of other bipartite prisms is utilized to supplement point cloud information missing in initial estimation, and noise possibly existing in the three-dimensional point cloud is gradually filtered.
Further, the step S1 specifically includes:
s11, constructing an imaging system consisting of a single camera and a rotary bipartite prism device, and sequentially adjusting the postures of the camera and the bipartite prism device to ensure the parallel relation between the target surface of the camera and the back surface of the bipartite prism, the vertical relation between the optical axis of the camera and the crest line at the top of the bipartite prism and the axial distance relation between the camera and the bipartite prism;
s12, establishing a working coordinate system O-XYZ of the imaging system, fixing an origin O at the optical center position of the camera, enabling a Z axis to coincide with the optical axis direction of the camera, enabling an X axis and a Y axis to be orthogonal to the Z axis, and enabling the X axis and the Y axis to respectively correspond to the row scanning direction and the column scanning direction of the camera image sensor;
S13, acquiring the internal parameters of the camera and the distortion coefficients of the lens by a traditional vision calibration method such as the Zhang Zhengyou calibration method, the direct linear transformation method, or the two-step calibration method, and adjusting the axial distance between the camera and the bipartite prism with the aid of measuring tools such as a vernier caliper or a laser interferometer.
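As an illustration of this calibration step, the sketch below uses OpenCV's implementation of Zhang's planar-target method; the checkerboard size, square size, and variable names are assumptions, and the axial camera-prism distance would still be measured separately (e.g., with a vernier caliper).

```python
import cv2
import numpy as np

def calibrate_intrinsics(images, board_size=(9, 6), square_mm=10.0):
    """Estimate the camera's internal parameter matrix and lens distortion
    coefficients from checkerboard views (Zhang's calibration method)."""
    objp = np.zeros((board_size[0] * board_size[1], 3), np.float32)
    objp[:, :2] = np.mgrid[0:board_size[0], 0:board_size[1]].T.reshape(-1, 2) * square_mm
    obj_points, img_points = [], []
    for img in images:
        gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
        found, corners = cv2.findChessboardCorners(gray, board_size)
        if found:
            obj_points.append(objp)
            img_points.append(corners)
    rms, A_int, dist, _, _ = cv2.calibrateCamera(
        obj_points, img_points, gray.shape[::-1], None, None)
    return A_int, dist
```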
Further, in step S2, the bipartite prism assembly is driven by the rotating mechanism to reach m rotation angle positions in succession, and the camera is triggered to acquire the corresponding dual-view image immediately after the bipartite prism reaches each specified rotation angle position, wherein the motion control of the rotating mechanism and the image acquisition triggering of the camera are both realized by software.
Further, the step S3 specifically includes:
s31, calculating the direction of the camera visual axis after deflection of the bipartite prism by using a ray tracing method, and determining two imaging visual angles corresponding to the rotation angle of any bipartite prism;
s32, deducing a dynamic virtual binocular system model equivalent to the imaging system according to the change relation of the camera visual axis orientation along with the rotation angle of the bipartite prism, and determining the position posture and the motion rule of the virtual binocular system;
s33, calculating a basic matrix and a change rule of the dynamic virtual binocular system according to internal parameters and external parameters of the virtual binocular system under any bipartite prism corner by referring to a traditional binocular vision theory;
s34, deriving epipolar constraint relations among the dual-view images collected by the system under any bipartite prism corner according to the basic matrix of the dynamic virtual binocular system, and thus constructing multi-epipolar constraint relations of multi-view image sequences corresponding to different bipartite prism corners;
S35, using the epipolar constraint between the left virtual camera and the right virtual camera at a specific prism rotation angle position, combined with a suitable window matching algorithm, to search for the homonymous image points contained in the dual-view image; on this basis, determining the epipolar constraints of these homonymous image points in the dual-view images corresponding to the other prism rotation angle positions, and filtering out homonymous image points with excessive deviation from the epipolar-line intersection according to the principle that a homonymous image point is theoretically located at the intersection point of the multiple epipolar lines.
Further, in step S31, the camera visual axis deflected by the bipartite prism points in two directions d_L and d_R, symmetric about the system optical axis direction, which are obtained by the ray tracing method from the vector refraction law applied at the faces of the bipartite prism; in these expressions, d_o = [0, 0, 1]^T is the optical axis direction of the single-camera imaging system, n_L is the normal vector of the left side face of the bipartite prism, n_R is the normal vector of the right side face of the bipartite prism, n_B is the normal vector of the back face of the bipartite prism, α is the included angle between the side faces and the back face of the bipartite prism, n is the refractive index of the bipartite prism material, and ω is the rotation angle of the bipartite prism; the normal vectors of the side faces and the back face of the bipartite prism are determined by the included angle α and the rotation angle ω.
further, in step S32, the dynamic virtual binocular system is composed of two symmetrically distributed virtual cameras, and is used to simplify and describe the process of acquiring the dual-view image by the cameras under the action of the rotating bipartite prism device; the internal parameters of the two virtual cameras are completely the same as those of the actually used cameras, the external parameters of the two virtual cameras mainly depend on the structural parameters and the motion parameters of the rotating bipartite prism, and the external parameters are expressed as follows under the rotating angle omega of any bipartite prism:
The external parameters consist of the rotation matrices R_L(ω), R_R(ω) and translation vectors t_L(ω), t_R(ω) of the left and right virtual cameras relative to the actual camera, where R_L(ω) is the rotation matrix of the left virtual camera relative to the actual camera at an arbitrary bipartite prism rotation angle ω, t_L(ω) is the translation vector of the left virtual camera relative to the actual camera at an arbitrary bipartite prism rotation angle ω, R_R(ω) is the rotation matrix of the right virtual camera relative to the actual camera at an arbitrary bipartite prism rotation angle ω, t_R(ω) is the translation vector of the right virtual camera relative to the actual camera at an arbitrary bipartite prism rotation angle ω, Rot denotes a rotation about the axis direction defined by the outer product of two vectors, through an angle determined by the vector cosine law, and g is the distance from the optical center of the actual camera to the back face of the bipartite prism.
Further, in step S33, a basic matrix exists between the left and right virtual cameras included in the dynamic virtual binocular system at any bipartite prism rotation angle ω:
F(ω) = (A_int)^(-T) (R_LR)^(-1) T_LR (A_int)^(-1)
R_LR = R_L(ω) R_R(ω)^(-1)
t_LR = t_L(ω) - R_L(ω) R_R(ω)^(-1) t_R(ω)
where A_int is the internal parameter matrix of the camera, R_LR is the relative rotation matrix of the left and right virtual cameras, and T_LR is the skew-symmetric matrix corresponding to the relative translation vector t_LR.
Further, in step S34, the basic matrix F(ω) of the left and right virtual cameras is multiplied by the homogeneous coordinates of the image points contained in one half of the dual-view image, so as to obtain the positions of the corresponding epipolar lines in the other half of the image; similarly, according to the relation between the left and right virtual camera positions and the bipartite prism rotation angle, the basic matrix and the corresponding epipolar line positions between any two virtual camera positions at the m bipartite prism rotation angles can be obtained by the same method, thereby generating a series of redundant stereo matching constraint conditions.
Further, in step S35, the window matching algorithm may be selected from existing algorithms such as the sum of absolute differences (SAD) algorithm, the sum of squared differences (SSD) algorithm, or the normalized cross-correlation (NCC) algorithm.
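A minimal sketch of SAD window matching along an epipolar line, assuming grayscale numpy images, a non-vertical line, and a reference point at least half a window away from the image border; the integer sampling along the line is a simplification of this sketch.

```python
import numpy as np

def sad(patch_a, patch_b):
    """Sum of absolute differences between two equally sized patches."""
    return np.abs(patch_a.astype(np.float32) - patch_b.astype(np.float32)).sum()

def match_along_epipolar_line(left, right, p_left, line, half_win=5):
    """Find the pixel on the epipolar line (a, b, c) in the right half-image whose
    SAD cost against the window around p_left in the left half-image is minimal."""
    a, b, c = line
    x0, y0 = int(p_left[0]), int(p_left[1])
    ref = left[y0 - half_win:y0 + half_win + 1, x0 - half_win:x0 + half_win + 1]
    best, best_cost = None, np.inf
    for x in range(half_win, right.shape[1] - half_win):
        y = int(round(-(a * x + c) / b))          # point on the line a*x + b*y + c = 0
        if half_win <= y < right.shape[0] - half_win:
            cand = right[y - half_win:y + half_win + 1, x - half_win:x + half_win + 1]
            cost = sad(ref, cand)
            if cost < best_cost:
                best_cost, best = cost, (x, y)
    return best
```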
Further, the step S4 specifically includes:
s41, calculating initial three-dimensional point cloud distribution of the target by utilizing a triangulation principle according to a result of stereo matching of the dual-view image acquired at the first prism corner position;
s42, collecting each corresponding double-view-angle image at the corner positions of other prisms, and updating the three-dimensional point cloud information of the target by using a triangulation principle after completing stereo matching;
and S43, comparing the initial three-dimensional point cloud with the updated three-dimensional point cloud, supplementing data which are not contained in the initial estimation, continuously correcting and optimizing the three-dimensional point cloud corresponding to the image point with the same name by utilizing the gradually introduced redundant information, and filtering the data with larger deviation before and after updating as noise.
Further, in step S41, the corresponding three-dimensional point cloud is calculated from the stereo matching result of the dual-view image by triangulating each pair of homonymous image points, where P_i is the three-dimensional coordinate of element i in the three-dimensional point cloud set, p_i^L is the homogeneous pixel coordinate of the homonymous image point contained in the left half of the dual-view image, p_i^R is the homogeneous pixel coordinate of the homonymous image point contained in the right half of the dual-view image, i belongs to the set of positive integers, and λ_L and λ_R are the scale factors of the projection ray vectors corresponding to p_i^L and p_i^R, which are eliminated by solving the simultaneous equations.
Further, in step S43, point cloud filtering is performed according to the deviation of the three-dimensional point cloud before and after updating; each filtering pass retains an updated point only if its deviation from the initial estimate is within the threshold:
P_filter = { P_i^update : || P_i^update - P_i^estimate || ≤ ε }
where P_filter is the filtered three-dimensional point cloud set, P_i^estimate is an element of the initially estimated three-dimensional point cloud set, P_i^update is an element of the three-dimensional point cloud set obtained by the update calculation, and ε is the deviation threshold between the updated point cloud and the initial estimate.
The embodiment also provides a specific implementation process of the single-camera image acquisition system and the three-dimensional reconstruction method based on the rotating bipartite prism, which are respectively described in detail below.
Single-camera image acquisition system based on rotary bipartite prism
As shown in fig. 1 to 5, the present embodiment provides a single-camera image capturing system based on a rotating bipartite prism, which includes a camera device and a rotating bipartite prism device. The camera device comprises a camera and a camera support, and the rotary bipartite prism device comprises a bipartite prism, a prism support structure, a rotary mechanism and an outer shell.
The camera device 1 specifically includes a camera 11 and a camera mount 12. The camera 11 is adjusted in position and attitude by the camera mount 12, with the camera target surface parallel to the back surface of the bipartite prism 21 and the visual axis directed perpendicular to the top ridge of the bipartite prism 21. Parameters of the camera 11 such as the focal length, field angle, and depth of field must be reasonably matched with parameters of the bipartite prism 21 such as the included angle between its side faces and back face and its refractive index, so as to avoid field-of-view occlusion.
The rotary bipartite prism device 2 comprises a bipartite prism assembly, a rotating mechanism, and an outer housing. The bipartite prism assembly comprises a bipartite prism 21 and a prism support structure 22; the bipartite prism 21 is installed on a rectangular mounting surface in the central area of the prism support structure 22 by spring fixing or glue bonding, and the prism support structure 22 is provided with arc-shaped slots in the circumferential direction to reduce the moment of inertia.
The rotating mechanism adopts a torque motor direct-drive mode, or alternatively a gear transmission, synchronous belt transmission, or worm-and-gear transmission mode; the torque motor direct-drive mode is selected in this embodiment. The torque motor mainly comprises a rotor 23, brushes 24, and a stator 25; specifically, the bipartite prism assembly is fixedly connected with the torque motor rotor 23 by threaded connection, and the torque motor stator 25 is fixed on the end face of the outer shell 26 by threaded connection.
The outer housing 26 fixes and protects the bipartite prism assembly and the torque motor, and the torque motor drives the bipartite prism assembly inside it to rotate.
The axial distance between the camera device 1 and the rotary bipartite prism device 2 can be dynamically adjusted according to the specific application and requirements, which provides a longitudinal degree of freedom for the multi-view image capturing process and richer image information for the three-dimensional reconstruction process.
In this embodiment, a rotating bipartite prism device is introduced in front of the camera; through the beam-splitting effect and the full-circle rotary motion of the bipartite prism, the visual axis pointing and imaging view angle of the camera can be adjusted arbitrarily, so that a multi-view target image sequence containing rich information is acquired, which can effectively improve the accuracy and efficiency of multi-view stereo matching and three-dimensional reconstruction. Compared with existing single-camera three-dimensional reconstruction systems that use cooperative markers or mirror groups, the system of this embodiment does not need cooperative markers as prior information and does not introduce reflective elements sensitive to error disturbances, and can therefore achieve better structural compactness, imaging flexibility, and environmental adaptability.
Single-camera multi-view three-dimensional reconstruction method based on rotating bipartite prism
As shown in fig. 6 to 8, the present embodiment provides a three-dimensional reconstruction method using the above single-camera image acquisition system based on a rotating bipartite prism, which specifically includes the following steps:
s1, system construction and parameter calibration
S11, constructing an imaging system consisting of the camera device 1 and the rotary bipartite prism device 2, and sequentially adjusting the postures of the camera 11 and the bipartite prism 21 to ensure the parallel relation between the target surface of the camera and the back surface of the bipartite prism, the vertical relation between the optical axis of the camera and the crest line of the top of the bipartite prism, and the axial distance relation between the camera and the bipartite prism;
s12, establishing a working coordinate system O-XYZ of the imaging system, fixing an origin O at the optical center position of the camera, enabling a Z axis to coincide with the optical axis direction of the camera, enabling an X axis and a Y axis to be orthogonal to the Z axis, and enabling the X axis and the Y axis to respectively correspond to the row scanning direction and the column scanning direction of the camera image sensor;
S13, obtaining the internal parameters of the camera and the distortion coefficients of the lens by a traditional visual calibration method such as the Zhang Zhengyou calibration method (adopted in this embodiment), the direct linear transformation method, or the two-step calibration method; determining the axial distance between the camera and the bipartite prism with measuring tools such as a vernier caliper or a laser interferometer (a vernier caliper is used in this embodiment); the calibration and measurement methods are mature prior-art methods and are not elaborated further.
S2, Multi-view image sequence acquisition
The motion of the rotating mechanism is controlled by software so that the bipartite prism is rotated successively to m = 3 rotation angle positions, recorded as ω_1 = 0°, ω_2 = 45°, and ω_3 = 90°; after the bipartite prism reaches each specified rotation angle position, the camera is triggered by software to acquire the corresponding dual-view image containing target information, finally yielding a multi-view target image sequence for three-dimensional reconstruction.
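The acquisition loop can be sketched as follows; the motor and camera interfaces (rotate_to, wait_until_settled, trigger_capture) are hypothetical placeholder names standing in for whatever motion-control and camera software is actually used.

```python
# Hypothetical acquisition loop: rotate the bipartite prism to each angle and
# trigger the camera once the prism has settled. rotate_to, wait_until_settled
# and trigger_capture are placeholder names, not a real SDK.

PRISM_ANGLES_DEG = [0.0, 45.0, 90.0]    # omega_1, omega_2, omega_3

def acquire_sequence(motor, camera):
    images = {}
    for omega in PRISM_ANGLES_DEG:
        motor.rotate_to(omega)                     # command the torque motor
        motor.wait_until_settled()                 # ensure the prism is at rest
        images[omega] = camera.trigger_capture()   # dual-view image at this angle
    return images
```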
S3, stereo matching and cross optimization
Establishing a polar constraint relation of the dual-view images corresponding to the corner positions of each bipartite prism by combining a dynamic virtual binocular system model, searching homonymous image points contained in the dual-view images through a window matching algorithm, and simultaneously performing cross check and optimizing a stereo matching result of the multi-view image sequence by using multi-polar constraint provided by image sequences corresponding to different bipartite prism corners;
S31, calculating the directions of the camera visual axis after deflection by the bipartite prism using the ray tracing method, and determining the two imaging view angles corresponding to an arbitrary bipartite prism rotation angle; the deflected camera visual axis points in two directions d_L and d_R, symmetric about the system optical axis direction, derived from the vector refraction law applied at the faces of the bipartite prism; here d_o = [0, 0, 1]^T is the optical axis direction of the single-camera imaging system, n_L is the normal vector of the left side face of the bipartite prism, n_R is the normal vector of the right side face of the bipartite prism, n_B is the normal vector of the back face of the bipartite prism, α is the included angle between the side faces and the back face of the bipartite prism, n is the refractive index of the bipartite prism material, and ω is the rotation angle of the bipartite prism; the normal vectors of the side faces and the back face are determined by α and ω. In this embodiment, the included angle between the side faces and the back face of the bipartite prism is α = 5°, and the refractive index is n = 1.52.
S32, deducing a dynamic virtual binocular system model equivalent to the imaging system according to the change relation of the camera visual axis orientation along with the rotation angle of the bipartite prism; the dynamic virtual binocular system consists of two virtual cameras which are symmetrically distributed and is used for simplifying and describing the process of acquiring the double-view-angle images by the cameras under the action of the rotating bipartite prism device; the internal parameters of the two virtual cameras are completely the same as those of the actually used cameras, the external parameters of the two virtual cameras mainly depend on the structural parameters and the motion parameters of the rotating bipartite prism, and the external parameters are expressed as follows under the rotating angle omega of any bipartite prism:
The external parameters consist of the rotation matrices R_L(ω), R_R(ω) and translation vectors t_L(ω), t_R(ω) of the left and right virtual cameras relative to the actual camera, where R_L(ω) is the rotation matrix of the left virtual camera relative to the actual camera at an arbitrary bipartite prism rotation angle ω, t_L(ω) is the translation vector of the left virtual camera relative to the actual camera at an arbitrary bipartite prism rotation angle ω, R_R(ω) is the rotation matrix of the right virtual camera relative to the actual camera at an arbitrary bipartite prism rotation angle ω, t_R(ω) is the translation vector of the right virtual camera relative to the actual camera at an arbitrary bipartite prism rotation angle ω, Rot denotes a rotation about the axis direction defined by the outer product of two vectors, through an angle determined by the vector cosine law, and g is the distance from the optical center of the actual camera to the back face of the bipartite prism.
S33, referring to the traditional binocular vision theory, according to the internal parameters and the external parameters of the virtual binocular system under the corner omega of any bipartite prism, the left virtual camera and the right virtual camera contained in the dynamic virtual binocular system meet the basic matrix:
F(ω) = (A_int)^(-T) (R_LR)^(-1) T_LR (A_int)^(-1)
R_LR = R_L(ω) R_R(ω)^(-1)
t_LR = t_L(ω) - R_L(ω) R_R(ω)^(-1) t_R(ω)
where A_int is the internal parameter matrix of the camera, R_LR is the relative rotation matrix of the left and right virtual cameras, and T_LR is the skew-symmetric matrix corresponding to the relative translation vector t_LR.
S34, multiplying the basic matrix F(ω) of the left and right virtual cameras in the dynamic virtual binocular system, at the bipartite prism rotation angle ω_1 = 0°, by the homogeneous coordinates of the image points contained in the left and right halves of the acquired image gives the positions of the corresponding epipolar lines in the right and left halves of the image, respectively; then, from the relation between the left and right virtual camera positions and the bipartite prism rotation angle, the basic matrix and the corresponding epipolar line positions between any two virtual camera positions at different bipartite prism rotation angles can be obtained, thereby generating a series of redundant stereo matching constraint conditions.
S35, using the epipolar constraint between the left and right virtual cameras at the prism rotation angle ω_1 = 0°, the homonymous image points p_i^L1 and p_i^R1 contained in the dual-view image are searched for with a window matching algorithm such as the sum of absolute differences (SAD), sum of squared differences (SSD), or normalized cross-correlation (NCC) algorithm; the SAD algorithm is adopted in this embodiment. On this basis, the epipolar constraints of the identified homonymous image points within the images acquired at ω_2 = 45° and ω_3 = 90° are determined; each group of homonymous image points determines 5 corresponding epipolar line positions. According to the principle that the homonymous image point of each view angle is theoretically located at the intersection of the 5 corresponding epipolar lines, homonymous image points whose deviation from the epipolar intersection exceeds the threshold δ = 0.5 pixel are filtered out.
S4, three-dimensional reconstruction and point cloud filtering
S41, using the homonymous image points p_i^L1 and p_i^R1 of the image acquired at the prism rotation angle ω_1 = 0°, the position coordinates of the corresponding target points are recovered by the triangulation principle to obtain the initial estimate of the three-dimensional point cloud; each target point P_i is computed from its pair of homonymous image points, where P_i is the three-dimensional coordinate of element i in the three-dimensional point cloud set, p_i^L is the homogeneous pixel coordinate of the homonymous image point contained in the left half of the dual-view image, p_i^R is the homogeneous pixel coordinate of the homonymous image point contained in the right half of the dual-view image, i belongs to the set of positive integers, and λ_L and λ_R are the scale factors of the projection ray vectors corresponding to p_i^L and p_i^R, which are eliminated by solving the simultaneous equations.
S42, sequentially using the stereo matching results of the images acquired at the prism rotation angles ω_2 = 45° and ω_3 = 90° to recalculate the three-dimensional point cloud, the calculation method being the same as in step S41;
S43, comparing the previous three-dimensional point cloud with the updated three-dimensional point cloud after each calculation is completed, and supplementing data not contained in the previous estimate; the distribution of the three-dimensional target point cloud is gradually corrected and optimized using the correspondence of the homonymous image points, while data with large deviation before and after updating is filtered out as noise; each filtering pass retains an updated point only if its deviation from the initial estimate is within the threshold:
P_filter = { P_i^update : || P_i^update - P_i^estimate || ≤ ε }
where P_filter is the filtered three-dimensional point cloud set, P_i^estimate is an element of the initially estimated three-dimensional point cloud set, P_i^update is an element of the three-dimensional point cloud set obtained by the update calculation, and ε is the deviation threshold between the updated point cloud and the initial estimate; in this embodiment, the deviation threshold is ε = 1 mm.
The foregoing detailed description of the preferred embodiments of the invention has been presented. It should be understood that numerous modifications and variations could be devised by those skilled in the art in light of the present teachings without departing from the inventive concepts. Therefore, the technical solutions available to those skilled in the art through logic analysis, reasoning and limited experiments based on the prior art according to the concept of the present invention should be within the scope of protection defined by the claims.

Claims (10)

1. A single-camera image acquisition system based on a rotary bipartite prism, comprising a camera device (1) and a rotary bipartite prism device (2), the camera device (1) comprising a camera (11) and a camera support (12) supporting the camera;
the rotary bipartite prism device (2) comprises a bipartite prism (21), a prism supporting structure (22), a rotating mechanism and an outer shell (26) for supporting the rotary bipartite prism device, wherein the bipartite prism (21) is fixedly arranged in the central area of the prism supporting structure (22), and the output end of the rotating mechanism is connected with the prism supporting structure (22) and is used for driving the prism supporting structure (22) to rotate on a vertical plane;
the detection end of the camera (11) is aligned with the bipartite prism (21).
2. The single-camera image acquisition system based on the rotating bipartite prism, according to claim 1, wherein the target surface of the camera (11) and the back surface of the bipartite prism (21) satisfy a parallel relationship, and the optical axis of the camera (11) intersects and is perpendicular to the top ridge line opposite to the back surface of the bipartite prism (21).
3. The single-camera image acquisition system based on the rotary bipartite prism of claim 1, wherein the rotary mechanism is a torque motor, and comprises a torque motor rotor (23), a torque motor brush (24) and a torque motor stator (25), the prism support structure (22) is connected with the torque motor rotor (23), and the torque motor stator (25) is installed on the outer shell (26).
4. A three-dimensional reconstruction method using a rotating bipartite prism based single camera image acquisition system according to claim 1, comprising the steps of:
system construction and parameter calibration: adjusting the position and the posture of the camera device and the rotary bipartite prism device to construct a single-camera imaging system and a working coordinate system thereof; acquiring internal parameters of the camera and an axial distance between the camera and the bipartite prism by using a visual calibration method;
acquiring a multi-view image sequence: the rotating mechanism is controlled to drive the bipartite prism to rotate, and the camera is used for collecting double-view images containing target information at the corner position of each bipartite prism to form a multi-view target image sequence for three-dimensional reconstruction;
stereo matching and cross optimization: deducing a dynamic virtual binocular system model equivalent to the single-camera imaging system according to the direction of the camera visual axis after deflection by the bipartite prism and the rotation angle of the bipartite prism, establishing an epipolar constraint relation of the dual-view image corresponding to each bipartite prism rotation angle position, searching for homonymous image points in the dual-view image through a window matching algorithm, and performing cross checking and optimization on the homonymous image points in the dual-view images at different bipartite prism rotation angle positions to realize stereo matching;
three-dimensional reconstruction and point cloud filtering: acquiring initial estimation of three-dimensional point cloud of a target according to the homonymous image point of the double-view-angle image corresponding to the corner position of the bipartite prism; and supplementing point cloud information missing from the initially estimated three-dimensional point cloud according to the homonymous image points of the double-view-angle images corresponding to the corner positions of the other bipartite prisms, so as to update the three-dimensional point cloud, and then carrying out noise filtering to obtain a final three-dimensional point cloud reconstruction result.
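Illustrative note (not part of the claims): read as an algorithm, claim 4 is a four-stage pipeline. The outline below is a hedged Python sketch of that control flow only; the helper callables calibrate_camera, capture_dual_view, match_and_cross_check and triangulate_and_filter are hypothetical placeholders standing for the operations detailed in claims 5 to 10, and the camera and prism_drive objects are assumed interfaces.

    def reconstruct_with_rotating_biprism(camera, prism_drive, rotation_angles,
                                          calibrate_camera, capture_dual_view,
                                          match_and_cross_check, triangulate_and_filter):
        """Sketch of the four steps of claim 4 (all helpers are hypothetical)."""
        # Step 1: system construction and parameter calibration.
        A_int, g = calibrate_camera(camera)   # camera intrinsics and camera-to-prism axial distance

        # Step 2: multi-view image sequence acquisition, one dual-view image per prism angle.
        dual_views = []
        for omega in rotation_angles:
            prism_drive.rotate_to(omega)
            dual_views.append((omega, capture_dual_view(camera)))

        # Step 3: stereo matching under epipolar constraints, cross-checked across angles.
        matches = match_and_cross_check(dual_views, A_int, g)

        # Step 4: initial triangulation, point cloud updating and noise filtering.
        return triangulate_and_filter(matches, A_int, g)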
5. The method according to claim 4, wherein in the system construction and parameter calibration step, the position and posture of the camera device and the rotating bipartite prism device are adjusted to ensure the parallel relationship between the camera target surface and the back surface of the bipartite prism, the perpendicular relationship between the camera optical axis and the top ridge line of the bipartite prism, and the axial distance relationship between the camera and the bipartite prism;
specifically, the working coordinate system of the single-camera imaging system is established with its origin O fixed at the optical center of the camera, its Z axis coinciding with the optical axis direction of the camera, and its X axis and Y axis both orthogonal to the Z axis, the X axis corresponding to the row scanning direction of the camera image sensor and the Y axis corresponding to the column scanning direction of the camera image sensor.
6. The method according to claim 4, wherein in the stereo matching and cross optimization step, the derivation of the dynamic virtual binocular system model specifically comprises:
calculating, by a ray tracing method, the two directions d_L and d_R, symmetric about the optical axis of the single-camera imaging system, into which the camera visual axis is deflected by the bipartite prism, thereby determining the two imaging view angles corresponding to any rotation angle of the bipartite prism; and deriving the dynamic virtual binocular system model from the variation of the camera visual axis orientation with the rotation angle of the bipartite prism;
the calculation expression of the direction of the camera visual axis after deflection of the bipartite prism is as follows:
(the expressions for d_L, d_R and the prism face normals are given as equation images FDA0002715734970000021 to FDA0002715734970000024 in the original publication; only the optical axis direction appears in text form)
d_o = [0, 0, 1]^T
in the formulas, d_o is the optical axis direction of the single-camera imaging system, n_L is the normal vector of the left side face of the bipartite prism, n_R is the normal vector of the right side face of the bipartite prism, n_B is the normal vector of the back face of the bipartite prism, α is the included angle between a side face and the back face of the bipartite prism, n is the refractive index of the bipartite prism material, and ω is the rotation angle of the bipartite prism;
the dynamic virtual binocular system model comprises a left virtual camera alpha machine and a right virtual camera, and the calculation expressions of the rotation matrix and the translation vector of the left virtual camera and the right virtual camera relative to the actual camera under any bipartite prism rotation angle omega are as follows:
(the expressions for R_L(ω), t_L(ω), R_R(ω) and t_R(ω) are given as equation images FDA0002715734970000031 and FDA0002715734970000032 in the original publication)
in the formulas, R_L(ω) and t_L(ω) are the rotation matrix and the translation vector of the left virtual camera relative to the actual camera at any bipartite prism rotation angle ω, R_R(ω) and t_R(ω) are the rotation matrix and the translation vector of the right virtual camera relative to the actual camera at the same rotation angle ω, Rot denotes a rotation about the axis defined by the outer product of two vectors through an angle determined by the vector cosine law, and g is the distance from the optical center of the actual camera to the back face of the bipartite prism.
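Illustrative note (not part of the claims): the deflected visual-axis directions d_L, d_R and the virtual-camera poses R_L(ω), t_L(ω), R_R(ω), t_R(ω) are published above only as equation images. The sketch below shows one conventional way to compute quantities of this kind, namely the vector form of Snell's law applied at the prism back face and at one inclined side face, followed by a Rodrigues rotation standing in for the Rot operation. The face-normal parameterization, the sign conventions and the translation expression are assumptions, not the patented formulas.

    import numpy as np

    def refract(d, n_vec, n1, n2):
        """Vector form of Snell's law: refract unit direction d at a surface whose unit
        normal n_vec points against the incoming ray, passing from index n1 to n2."""
        d = d / np.linalg.norm(d)
        n_vec = n_vec / np.linalg.norm(n_vec)
        eta = n1 / n2
        cos_i = -np.dot(d, n_vec)
        sin_t2 = eta**2 * (1.0 - cos_i**2)
        cos_t = np.sqrt(max(0.0, 1.0 - sin_t2))      # total internal reflection not handled
        return eta * d + (eta * cos_i - cos_t) * n_vec

    def deflected_axis(alpha, omega, n_glass, left=True):
        """Assumed geometry: tracing outward from the camera, the visual axis d_o = [0,0,1]
        enters the flat back face of the prism and exits through one inclined side face."""
        d_o = np.array([0.0, 0.0, 1.0])
        sign = 1.0 if left else -1.0
        n_back = np.array([0.0, 0.0, -1.0])          # back-face normal, oriented toward the camera
        # Assumed side-face normal for wedge angle alpha and rotation angle omega,
        # oriented against the ray as refract() expects.
        n_side = np.array([sign * np.sin(alpha) * np.cos(omega),
                           sign * np.sin(alpha) * np.sin(omega),
                           -np.cos(alpha)])
        d_glass = refract(d_o, n_back, 1.0, n_glass)     # no bending at normal incidence
        return refract(d_glass, n_side, n_glass, 1.0)    # deviation happens at the wedge face

    def rot_between(a, b):
        """Rodrigues rotation taking unit vector a onto unit vector b; the axis is the
        outer product a x b and the angle follows from the vector cosine law."""
        a = a / np.linalg.norm(a)
        b = b / np.linalg.norm(b)
        axis = np.cross(a, b)
        s, c = np.linalg.norm(axis), np.dot(a, b)
        if s < 1e-12:
            return np.eye(3)                 # near-parallel case; prism deflections are small
        k = axis / s
        K = np.array([[0.0, -k[2], k[1]], [k[2], 0.0, -k[0]], [-k[1], k[0], 0.0]])
        return np.eye(3) + s * K + (1.0 - c) * (K @ K)

    def virtual_pose(alpha, omega, n_glass, g, left=True):
        """Assumed construction of one virtual-camera pose: rotate the actual camera axis
        onto the deflected axis; the translation below is an illustrative placeholder in
        which g only sets the scale, not the patented expression."""
        d_o = np.array([0.0, 0.0, 1.0])
        d = deflected_axis(alpha, omega, n_glass, left)
        R = rot_between(d_o, d)
        t = g * (d - d_o)
        return R, t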
7. The method according to claim 6, wherein the dynamic virtual binocular system model comprises a fundamental matrix between the left virtual camera and the right virtual camera at any bipartite prism rotation angle ω, the fundamental matrix being calculated in the stereo matching and cross optimization step, using the internal parameters obtained in the system construction and parameter calibration step, as follows:
F(ω) = (A_int)^(-T) (R_LR)^(-1) T_LR (A_int)^(-1)
R_LR = R_L(ω) R_R(ω)^(-1)
t_LR = t_L(ω) − R_L(ω) R_R(ω)^(-1) t_R(ω)
in the formula, A_int is the internal parameter matrix of the camera, R_LR is the relative rotation matrix between the left virtual camera and the right virtual camera, and T_LR is the skew-symmetric matrix corresponding to the relative translation vector t_LR;
multiplying the fundamental matrix F(ω) of the left and right virtual cameras by the homogeneous coordinates of an image point in one half of the dual-view image yields the epipolar line on which the corresponding image point lies in the other half of the image, thereby establishing the epipolar constraint relation.
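Illustrative note (not part of the claims): as a concrete example of evaluating an epipolar constraint of this kind, the snippet below assembles a fundamental matrix from an intrinsic matrix and a relative pose and maps a pixel to its epipolar line. It follows the textbook composition F = A^(-T) [t]_x R A^(-1), which may differ from the exact composition recited in claim 7; all names are assumptions.

    import numpy as np

    def skew(t):
        """Skew-symmetric matrix [t]_x such that skew(t) @ v equals np.cross(t, v)."""
        return np.array([[0.0, -t[2], t[1]],
                         [t[2], 0.0, -t[0]],
                         [-t[1], t[0], 0.0]])

    def fundamental_matrix(A_int, R_LR, t_LR):
        """Textbook two-view fundamental matrix for a single shared intrinsic matrix A_int."""
        E = skew(t_LR) @ R_LR                 # essential-matrix style core
        A_inv = np.linalg.inv(A_int)
        return A_inv.T @ E @ A_inv

    def epipolar_line(F, p):
        """Homogeneous line l = F p on which the homonymous point must lie in the other half-image."""
        return F @ np.array([p[0], p[1], 1.0])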
8. The method according to claim 4, wherein in the stereo matching and cross optimization step, the cross-checking and optimization is implemented by filtering out homonymous image points that deviate excessively from the corresponding epipolar lines, according to the principle that homonymous image points should theoretically lie at the intersection points of the epipolar lines.
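Illustrative note (not part of the claims): a minimal sketch of the cross-check in claim 8, assuming the rejection criterion is the point-to-epipolar-line distance and that the pixel threshold max_dist_px is a free parameter not fixed by the claim.

    import numpy as np

    def point_line_distance(p, line):
        """Distance in pixels from point p = (u, v) to the homogeneous image line (a, b, c)."""
        a, b, c = line
        return abs(a * p[0] + b * p[1] + c) / np.hypot(a, b)

    def cross_check(points, epipolar_lines_per_point, max_dist_px=1.5):
        """Keep only homonymous points that stay close to their epipolar lines at every
        prism rotation angle considered (illustrative criterion only)."""
        kept = []
        for p, lines in zip(points, epipolar_lines_per_point):
            if all(point_line_distance(p, l) <= max_dist_px for l in lines):
                kept.append(p)
        return kept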
9. The method according to claim 4, wherein in the three-dimensional reconstruction and point cloud filtering step, the initial estimate of the three-dimensional point cloud is calculated as follows:
(the triangulation expression and the associated symbols are given as equation images FDA0002715734970000041 to FDA0002715734970000047 in the original publication)
in the formula, P_i is the three-dimensional coordinate of element i in the three-dimensional point cloud set, i being a positive integer; the expression involves the homogeneous pixel coordinates of the homonymous image points contained in the left half and in the right half of the dual-view image; λ_L is the scale factor of the projection ray vector corresponding to the left-half image point, and λ_R is the scale factor of the projection ray vector corresponding to the right-half image point.
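Illustrative note (not part of the claims): because the triangulation expression of claim 9 is published as an image, the snippet below gives only a generic linear (DLT-style) two-view triangulation consistent with the symbols defined above; it recovers the three-dimensional point from one pair of homonymous pixel coordinates, with the two projection-ray scale factors handled implicitly. The pose convention and the solution method are assumptions.

    import numpy as np

    def triangulate(A_int, R_L, t_L, R_R, t_R, p_left, p_right):
        """Linear triangulation of one homonymous point pair from a dual-view image.
        R_*, t_* are assumed virtual-camera poses; p_left, p_right are (u, v) pixels."""
        P_L = A_int @ np.hstack([R_L, t_L.reshape(3, 1)])   # 3x4 projection matrices
        P_R = A_int @ np.hstack([R_R, t_R.reshape(3, 1)])

        def rows(P, p):
            return np.array([p[0] * P[2] - P[0],
                             p[1] * P[2] - P[1]])

        A = np.vstack([rows(P_L, p_left), rows(P_R, p_right)])
        _, _, Vt = np.linalg.svd(A)
        X = Vt[-1]
        return X[:3] / X[3]                                  # inhomogeneous 3D point P_i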
10. The method according to claim 4, wherein in the three-dimensional reconstruction and point cloud filtering step, the noise filtering is specifically performed as point cloud filtering according to the deviation of the three-dimensional point cloud before and after updating, each pass of the point cloud filtering being calculated as:
(the filtering criterion is given as equation image FDA0002715734970000048 in the original publication)
in the formula, the filtered three-dimensional point cloud set is obtained from P_i^estimate, the elements of the initially estimated three-dimensional point cloud set, and P_i^update, the corresponding elements of the updated three-dimensional point cloud set, where ε is the deviation threshold between the updated point cloud and the initial estimate.
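Illustrative note (not part of the claims): a minimal sketch of the deviation-based filtering of claim 10, assuming the criterion is the Euclidean distance between corresponding points of the initial and updated clouds compared against the threshold ε; the exact criterion appears only as an equation image in the original publication.

    import numpy as np

    def filter_point_cloud(P_estimate, P_update, eps):
        """Keep updated points whose deviation from the initial estimate is below eps.
        P_estimate and P_update are (N, 3) arrays with row-wise corresponding points."""
        P_estimate = np.asarray(P_estimate, dtype=float)
        P_update = np.asarray(P_update, dtype=float)
        deviation = np.linalg.norm(P_update - P_estimate, axis=1)
        return P_update[deviation <= eps]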
CN202011072913.6A · Priority date 2020-10-09 · Filing date 2020-10-09 · Single-camera image acquisition system and three-dimensional reconstruction method based on rotating bisection prism · Active · CN112330794B (en)

Priority Applications (1)

Application Number | Priority Date | Filing Date | Title
CN202011072913.6A (CN112330794B (en)) | 2020-10-09 | 2020-10-09 | Single-camera image acquisition system and three-dimensional reconstruction method based on rotating bisection prism

Publications (2)

Publication Number | Publication Date
CN112330794A (en) | 2021-02-05
CN112330794B (en) | 2022-06-14

Family

ID=74314749

Family Applications (1)

Application Number | Title | Priority Date | Filing Date
CN202011072913.6A (Active, CN112330794B (en)) | Single-camera image acquisition system and three-dimensional reconstruction method based on rotating bisection prism | 2020-10-09 | 2020-10-09

Country Status (1)

Country | Link
CN | CN112330794B (en)

Patent Citations (12)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
US20130101276A1 (en)*2011-10-212013-04-25Raytheon CompanySingle axis gimbal optical stabilization system
CN103142202A (en)*2013-01-212013-06-12东北大学Prism-based medical endoscope system with measurement function and method
CN105700320A (en)*2016-04-132016-06-22苏州大学Holographic three-dimensional display method and device based on spatial light modulator
CN105938318A (en)*2016-05-302016-09-14苏州大学Color holographic three-dimensional display method and system based on time division multiplexing
CN107014307A (en)*2017-04-172017-08-04深圳广田机器人有限公司The acquisition methods of three-dimensional laser scanner and three-dimensional information
CN108253939A (en)*2017-12-192018-07-06同济大学Variable optical axis single eye stereo vision measuring method
CN109668509A (en)*2019-01-182019-04-23南京理工大学Based on biprism single camera three-dimensional measurement industrial endoscope system and measurement method
CN110111262A (en)*2019-03-292019-08-09北京小鸟听听科技有限公司A kind of projector distortion correction method, device and projector
CN110336987A (en)*2019-04-032019-10-15北京小鸟听听科技有限公司A kind of projector distortion correction method, device and projector
CN110243283A (en)*2019-05-302019-09-17同济大学 A variable boresight visual measurement system and method
CN110570463A (en)*2019-09-112019-12-13深圳市道通智能航空技术有限公司target state estimation method and device and unmanned aerial vehicle
CN111416972A (en)*2020-01-212020-07-14同济大学 A three-dimensional imaging system and method based on axially adjustable cascade rotating mirrors

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
ANHU LI et al.: "Investigation of inverse solutions for tilting orthogonal double prisms in laser pointing with submicroradian precision", Journal of Lightwave Technology *

Cited By (13)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
CN113156641A (en)*2021-02-242021-07-23同济大学Image space scanning imaging method based on achromatic cascade prism
CN113885195A (en)*2021-08-172022-01-04屏丽科技成都有限责任公司Color field correction method for eliminating image deviation of light combination prism
CN113885195B (en)*2021-08-172023-10-03成都九天画芯科技有限公司Color field correction method for eliminating image deviation of light combining prism
CN113971719A (en)*2021-10-262022-01-25上海脉衍人工智能科技有限公司System, method and equipment for sampling and reconstructing nerve radiation field
CN113971719B (en)*2021-10-262024-04-12上海脉衍人工智能科技有限公司System, method and equipment for sampling and reconstructing nerve radiation field
CN114157852B (en)*2021-11-302022-12-13北京理工大学 A three-dimensional imaging method and system of a virtual camera array based on a rotating biprism
CN114157852A (en)*2021-11-302022-03-08北京理工大学 A three-dimensional imaging method and system of virtual camera array based on rotating biprism
WO2023195593A1 (en)*2022-04-072023-10-12(주)인텍플러스Shape profile measuring device using line beam
CN114937099A (en)*2022-05-312022-08-23北京理工大学Three-dimensional calculation ghost imaging method and system based on rotatable prism combination
CN114937099B (en)*2022-05-312024-09-03北京理工大学 A three-dimensional computational ghost imaging method and system based on a rotatable prism combination
CN115063567A (en)*2022-08-192022-09-16中国石油大学(华东) A three-dimensional optical path analysis method for a biprism monocular stereo vision system
CN116156146A (en)*2023-03-212023-05-23同济大学 A three-dimensional imaging method with a dynamic virtual camera
CN119728942A (en)*2024-12-192025-03-28同济大学 A device for generating dynamic multi-eye virtual camera

Also Published As

Publication number | Publication date
CN112330794B (en) | 2022-06-14

Similar Documents

Publication | Publication Date | Title
CN112330794A (en)Single-camera image acquisition system based on rotary bipartite prism and three-dimensional reconstruction method
CN111442721B (en) A calibration device and method based on multi-laser ranging and angle measurement
Akbarzadeh et al.Towards urban 3d reconstruction from video
CN109211107B (en)Measuring device, rotating body and method for generating image data
CN108648232B (en)Binocular stereoscopic vision sensor integrated calibration method based on precise two-axis turntable
CN109919911B (en)Mobile three-dimensional reconstruction method based on multi-view photometric stereo
US6643396B1 (en)Acquisition of 3-D scenes with a single hand held camera
CN111416972B (en)Three-dimensional imaging system and method based on axially adjustable cascade rotating mirror
CN111854636B (en)Multi-camera array three-dimensional detection system and method
CN110243283A (en) A variable boresight visual measurement system and method
WO2018076154A1 (en)Spatial positioning calibration of fisheye camera-based panoramic video generating method
Frahm et al.Pose estimation for multi-camera systems
CN111429523B (en)Remote calibration method in 3D modeling
CN111445529B (en)Calibration equipment and method based on multi-laser ranging
CN111879354A (en)Unmanned aerial vehicle measurement system that becomes more meticulous
CN110419208B (en)Imaging system, imaging control method, image processing apparatus, and computer readable medium
CN113781576A (en) Binocular vision detection system, method and device for real-time adjustment of multi-degree-of-freedom pose
Iocchi et al.A multiresolution stereo vision system for mobile robots
CN108955642B (en) A seamless stitching method for large-format equivalent central projection images
CN116156146A (en) A three-dimensional imaging method with a dynamic virtual camera
Dang et al.Self-calibration for active automotive stereo vision
Strelow et al.Extending shape-from-motion to noncentral onmidirectional cameras
Castanheiro et al.Modeling hyperhemispherical points and calibrating a dual-fish-eye system for close-range applications
Traffelet et al.Target-based calibration of underwater camera housing parameters
WO2020244273A1 (en)Dual camera three-dimensional stereoscopic imaging system and processing method

Legal Events

Date | Code | Title | Description
| PB01 | Publication |
| SE01 | Entry into force of request for substantive examination |
| GR01 | Patent grant |
