CN113808216A - Camera calibration method and device, electronic device and storage medium - Google Patents

Camera calibration method and device, electronic device and storage medium

Info

Publication number
CN113808216A
CN113808216A (application CN202111011090.0A; granted as CN113808216B)
Authority
CN
China
Prior art keywords
camera
image
plane
target plane
target
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202111011090.0A
Other languages
Chinese (zh)
Other versions
CN113808216B (en)
Inventor
王潇峰
刘余钱
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Sensetime Lingang Intelligent Technology Co Ltd
Original Assignee
Shanghai Sensetime Lingang Intelligent Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Sensetime Lingang Intelligent Technology Co Ltd
Priority to CN202111011090.0A
Publication of CN113808216A
Application granted
Publication of CN113808216B
Legal status: Active
Anticipated expiration

Abstract

The disclosure relates to a camera calibration method and device, an electronic device and a storage medium. The method comprises the following steps: acquiring internal parameters of a camera and the height of the camera relative to a target plane; determining three-dimensional coordinates of a plurality of space points in a camera coordinate system of the camera according to the image acquired by the camera; determining a normal vector of the target plane according to the three-dimensional coordinates of the plurality of space points in the camera coordinate system; determining the attitude angle of the camera relative to the target plane according to the normal vector of the target plane; determining a homography matrix of the camera to the target plane according to the internal parameters, the height and the attitude angle.

Description

Camera calibration method and device, electronic device and storage medium
Technical Field
The present disclosure relates to the field of camera technologies, and in particular, to a camera calibration method and apparatus, an electronic device, and a storage medium.
Background
In automatic driving application scenarios, it is usually necessary to perceive environmental information around the vehicle, such as lane lines and obstacles, using a monocular camera. Since a monocular camera cannot directly recover the three-dimensional scale of each target (e.g., an object or a person) around the vehicle, it is generally assumed that the vehicle travels on a locally planar road, and the three-dimensional scale is recovered by mapping image points into the road-plane coordinate system through a pre-calibrated homography matrix from the camera to the road plane. An accurate homography matrix is therefore important for application scenarios such as automatic driving.
Disclosure of Invention
The present disclosure provides a camera calibration technical scheme.
According to an aspect of the present disclosure, there is provided a camera calibration method, including:
acquiring internal parameters of a camera and the height of the camera relative to a target plane;
determining three-dimensional coordinates of a plurality of space points in a camera coordinate system of the camera according to the image acquired by the camera;
determining a normal vector of the target plane according to the three-dimensional coordinates of the plurality of space points in the camera coordinate system;
determining the attitude angle of the camera relative to the target plane according to the normal vector of the target plane;
determining a homography matrix of the camera to the target plane according to the internal parameters, the height and the attitude angle.
The calibration method comprises the steps of acquiring the internal parameters of a camera and the height of the camera relative to a target plane; determining three-dimensional coordinates of a plurality of spatial points in the camera coordinate system from an image collected by the camera; determining a normal vector of the target plane from those three-dimensional coordinates; determining an attitude angle of the camera relative to the target plane from the normal vector; and determining a homography matrix from the camera to the target plane according to the internal parameters, the height, and the attitude angle. Thus only the internal parameters and the height need to be acquired to calibrate the homography matrix, so the calibration result adapts better to scene changes and is more accurate in online use. In addition, the camera calibration method provided by the embodiments of the disclosure removes manual dependence from the calibration process: the process is automated and needs no manual participation, which reduces the probability of errors. It also removes the dependence on artificial reference objects and special calibration scenes, so the method can be applied in a wider range of scenarios.
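As an illustrative sketch of the last two steps (not the patent's exact formulation), once the unit normal vector of the target plane is known in camera coordinates, the attitude angle of the camera relative to the plane can be recovered trigonometrically. The axis convention below (x right, y down, z forward, so a road plane's upward normal is roughly (0, −1, 0)) is an assumption for the example.

```python
import math

def attitude_from_normal(n):
    """Pitch and roll of the camera relative to a plane with normal n, given
    in camera coordinates (x right, y down, z forward). For a level camera
    above a road plane, n is approximately (0, -1, 0). The angle convention
    here is an illustrative assumption, not the patent's exact definition."""
    nx, ny, nz = n
    norm = math.sqrt(nx * nx + ny * ny + nz * nz)
    nx, ny, nz = nx / norm, ny / norm, nz / norm
    pitch = math.asin(nz)       # forward tilt: normal leans into the optical axis
    roll = math.atan2(nx, -ny)  # sideways tilt around the optical axis
    return pitch, roll

# A level camera: the plane normal points straight up (negative y, since y is down).
print(attitude_from_normal((0.0, -1.0, 0.0)))  # → (0.0, 0.0)
```

A camera pitched forward by 0.1 rad sees the normal tilted into the optical axis, and the function returns a pitch of 0.1 accordingly.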
In one possible implementation manner, the determining a normal vector of the target plane according to three-dimensional coordinates of the plurality of spatial points in the camera coordinate system includes:
extracting a plurality of candidate planes according to the three-dimensional coordinates of the plurality of space points in the camera coordinate system;
and determining the normal vector of the target plane according to the normal vectors of the candidate planes.
In this implementation, a plurality of candidate planes are extracted according to the three-dimensional coordinates of the plurality of spatial points in the camera coordinate system, and the normal vector of the target plane is determined according to the normal vectors of the plurality of candidate planes, so that the normal vector of the target plane can be determined efficiently and accurately.
In a possible implementation manner, the determining a normal vector of the target plane according to normal vectors of the candidate planes includes:
screening out potential planes of the target plane from the candidate planes according to the motion direction of the camera and normal vectors of the candidate planes;
and determining the normal vector of the target plane according to the normal vector of the potential plane.
In this implementation, the potential plane of the target plane is screened out from the plurality of candidate planes according to the motion direction of the camera and the normal vectors of the plurality of candidate planes, and the normal vector of the target plane is determined according to the normal vector of the potential plane, so that the accuracy of the determined normal vector of the target plane can be further improved, and the speed of determining the normal vector of the target plane can be improved.
In a possible implementation manner, the screening out a potential plane of the target plane from the candidate planes according to the motion direction of the camera and normal vectors of the candidate planes includes:
for any of the plurality of candidate planes, determining the candidate plane as a potential plane of the target plane in response to a normal vector of the candidate plane being approximately perpendicular to a forward motion direction vector of the camera.
In this implementation, for any candidate plane in the plurality of candidate planes, the candidate plane is determined as a potential plane of the target plane in response to that the normal vector of the candidate plane is approximately perpendicular to the forward motion direction vector of the camera, so that the potential plane of the target plane can be quickly and accurately screened out from a large number of candidate planes.
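The perpendicularity test described above can be sketched as a simple cosine check. The tolerance value below is an illustrative assumption, not a value from the patent.

```python
import math

def is_potential_plane(normal, forward, cos_tol=0.1):
    """A candidate plane is kept as a potential target plane when its normal
    is approximately perpendicular to the camera's forward motion direction,
    i.e. the absolute cosine between the two vectors is below a small
    tolerance (cos_tol is an illustrative assumption)."""
    dot = sum(a * b for a, b in zip(normal, forward))
    na = math.sqrt(sum(a * a for a in normal))
    nf = math.sqrt(sum(b * b for b in forward))
    return abs(dot) / (na * nf) < cos_tol

forward = (0.0, 0.0, 1.0)  # camera moving along its optical axis
print(is_potential_plane((0.0, -1.0, 0.0), forward))  # road-like plane → True
print(is_potential_plane((0.0, 0.0, -1.0), forward))  # wall facing camera → False
```

This rejects, for example, building facades whose normals point back at the moving camera, while keeping ground-like planes.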
In a possible implementation manner, the determining a normal vector of the target plane according to a normal vector of the potential plane includes:
in the case that a plurality of potential planes exist, for any potential plane in the plurality of potential planes, determining the support number of the potential plane according to the normal vectors of the plurality of potential planes, wherein the support number of the potential plane represents the number of potential planes which are approximately parallel to the potential plane in the plurality of potential planes;
and determining the normal vector of the target plane according to the support numbers of the plurality of potential planes.
In this implementation, in a case where there are multiple potential planes, the support number of each potential plane is determined according to the normal vectors of the multiple potential planes, and the normal vector of the target plane is determined according to those support numbers, so that the accuracy of the determined normal vector of the target plane can be improved.
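The support-number voting described above can be sketched as follows; the parallelism threshold is an illustrative assumption.

```python
import math

def pick_target_normal(normals, cos_parallel=0.99):
    """Among potential-plane normals, return the one with the largest support:
    the number of potential planes approximately parallel to it (absolute
    cosine above a threshold, which is an illustrative assumption)."""
    def cos(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        return dot / (math.sqrt(sum(x * x for x in a)) *
                      math.sqrt(sum(y * y for y in b)))
    best, best_support = None, -1
    for n in normals:
        support = sum(1 for m in normals if abs(cos(n, m)) > cos_parallel)
        if support > best_support:
            best, best_support = n, support
    return best

# Three noisy road-like normals plus one outlier: the road cluster wins.
normals = [(0.0, -1.0, 0.0), (0.01, -1.0, 0.0), (0.0, -1.0, 0.02), (1.0, 0.0, 0.0)]
print(pick_target_normal(normals))
```

The outlier supports only itself (support 1), while each road-like normal is supported by all three members of its cluster, so a cluster member is selected.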
In a possible implementation manner, the determining a normal vector of the target plane according to the number of supports of the plurality of potential planes includes:
and determining the normal vector of the potential plane with the maximum support number in the plurality of potential planes as the normal vector of the target plane.
In this implementation, the normal vector of the potential plane with the largest number of supports among the plurality of potential planes is determined as the normal vector of the target plane, so that the accuracy of the determined normal vector of the target plane can be improved, that is, the accuracy of the determined target plane can be improved, and the accuracy of the camera calibration result can be improved.
In one possible implementation, the determining three-dimensional coordinates of a plurality of spatial points in a camera coordinate system of the camera according to the image acquired by the camera includes:
acquiring a first image and a second image acquired by the camera in a motion state;
determining a first rotation matrix and a translation vector of the camera motion from the first image and the second image;
and determining three-dimensional coordinates of a plurality of space points in a camera coordinate system of the camera according to the first rotation matrix and the translation vector.
In this implementation, by acquiring a first image and a second image captured by the camera in a moving state, determining a first rotation matrix and a translation vector of the camera motion according to the first image and the second image, and determining three-dimensional coordinates of a plurality of spatial points in the camera coordinate system of the camera according to the first rotation matrix and the translation vector, three-dimensional coordinates of a plurality of spatial points in the camera coordinate system of the camera can be determined quickly and accurately based on the image captured by the camera.
In one possible implementation, the determining a first rotation matrix and a translation vector of the camera motion from the first image and the second image includes:
extracting feature points of the first image and feature points of the second image;
determining an essential matrix or a basic matrix from the first image to the second image according to the matching relation between the characteristic points of the first image and the characteristic points of the second image;
determining a first rotation matrix and a translation vector of the camera motion from the essential matrix or the fundamental matrix.
In this implementation, by extracting feature points of the first image and feature points of the second image, determining an essential matrix or a fundamental matrix from the first image to the second image according to a matching relationship between the feature points of the first image and the feature points of the second image, and determining a first rotation matrix and a translation vector of the camera motion according to the essential matrix or the fundamental matrix, the first rotation matrix and the translation vector of the camera motion can be accurately determined, and thus the accuracy of three-dimensional coordinates of the determined spatial points in a camera coordinate system of the camera can be improved.
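As a sketch of the last step, the classic SVD-based decomposition of an essential matrix E = [t]×R yields two rotation candidates (and, with ±t, four motion hypotheses, from which the valid one is chosen by a cheirality check, omitted here). This is the textbook method, offered as an assumption about how such a decomposition can be done rather than the patent's exact procedure.

```python
import numpy as np

def skew(t):
    """Cross-product matrix [t]x such that skew(t) @ v == np.cross(t, v)."""
    return np.array([[0.0, -t[2], t[1]],
                     [t[2], 0.0, -t[0]],
                     [-t[1], t[0], 0.0]])

def rotations_from_essential(E):
    """Two rotation candidates from E = [t]x R via SVD (Hartley-Zisserman
    style). Combined with +-t this gives the four classic (R, t) hypotheses."""
    U, _, Vt = np.linalg.svd(E)
    W = np.array([[0.0, -1.0, 0.0],
                  [1.0, 0.0, 0.0],
                  [0.0, 0.0, 1.0]])
    R1 = U @ W @ Vt
    R2 = U @ W.T @ Vt
    # Force proper rotations (determinant +1).
    R1 *= np.sign(np.linalg.det(R1))
    R2 *= np.sign(np.linalg.det(R2))
    return R1, R2

# Round trip: build E from a known motion and check one candidate recovers R.
theta = 0.1
R = np.array([[np.cos(theta), 0.0, np.sin(theta)],
              [0.0, 1.0, 0.0],
              [-np.sin(theta), 0.0, np.cos(theta)]])
t = np.array([0.1, 0.0, 1.0])
E = skew(t) @ R
R1, R2 = rotations_from_essential(E)
print(min(np.linalg.norm(R1 - R), np.linalg.norm(R2 - R)) < 1e-6)  # → True
```

With the relative pose in hand, the spatial points' three-dimensional coordinates follow by triangulating matched feature points between the two views.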
In one possible implementation, the determining a homography matrix of the camera to the target plane according to the internal parameters, the height, and the attitude angle includes:
determining a second rotation matrix of the camera from a front view state to a top view state according to the attitude angle;
converting a third image acquired by the camera in the forward-looking state into a fourth image, wherein the fourth image is an image looking down on the target plane;
acquiring image coordinates of a first image point in the third image and image coordinates of a second image point in the fourth image, wherein the first image point and the second image point are corresponding image points;
and determining a homography matrix from the camera to the target plane according to the image coordinates of the first image point, the image coordinates of the second image point, the second rotation matrix, the internal parameters and the height.
In this implementation, a second rotation matrix describing the camera's rotation from a front-view state to a top-view state is determined according to the attitude angle; a third image acquired by the camera in the front-view state is converted into a fourth image; image coordinates of a first image point in the third image and of the corresponding second image point in the fourth image are acquired; and a homography matrix from the camera to the target plane is determined according to these image coordinates, the second rotation matrix, the internal parameters, and the height, so that accurate camera calibration can be achieved.
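For context, the standard plane-induced homography relating two views of a plane nᵀX = d (expressed in the first camera's coordinates), with relative motion X₂ = RX₁ + t and shared intrinsics K, is H = K(R + t·nᵀ/d)K⁻¹. The sketch below verifies this formula numerically; all numbers (intrinsics, height, tilt) are illustrative assumptions, and this is the general textbook relation rather than the patent's exact construction.

```python
import numpy as np

# Illustrative intrinsics and geometry (assumed values, not from the patent).
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])
h = 1.5                                  # camera height above the road plane
n = np.array([0.0, 1.0, 0.0])            # plane normal (y points down): n.X = h
theta = 0.05                             # small pitch between the two views
R = np.array([[1.0, 0.0, 0.0],
              [0.0, np.cos(theta), -np.sin(theta)],
              [0.0, np.sin(theta), np.cos(theta)]])
t = np.array([0.0, 0.0, 0.2])            # slight forward translation

# Plane-induced homography H = K (R + t n^T / d) K^{-1} with d = h.
H = K @ (R + np.outer(t, n) / h) @ np.linalg.inv(K)

# Sanity check: a road point projected in view 1 maps under H to its view-2 pixel.
X = np.array([0.3, h, 6.0])              # a point on the road, 6 m ahead
x1 = K @ X; x1 /= x1[2]
x2 = K @ (R @ X + t); x2 /= x2[2]
x2_h = H @ x1; x2_h /= x2_h[2]
print(np.allclose(x2_h, x2, atol=1e-6))  # → True
```

The check only holds for points lying on the plane, which is exactly why a homography (rather than a full 3D reconstruction) suffices for mapping image points onto the road.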
In one possible implementation, after the determining the homography matrix of the camera to the target plane, the method further includes:
acquiring image coordinates of a target object from a fifth image acquired by the camera;
converting the image coordinate of the target object to the target plane according to the homography matrix to obtain the coordinate of the target object on the target plane;
and determining a verification result of the homography matrix according to preset attribute information of the target object and the coordinate of the target object on the target plane, wherein the preset attribute information comprises preset shape information and/or preset size information of the target object.
In this implementation, image coordinates of a target object are acquired from a fifth image collected by the camera; the image coordinates are converted to the target plane according to the homography matrix to obtain the coordinates of the target object on the target plane; and a verification result of the homography matrix is determined according to the preset attribute information of the target object (preset shape information and/or preset size information) and the coordinates of the target object on the target plane, so that the accuracy of the calibrated homography matrix can be verified automatically.
In a possible implementation manner, the determining a verification result of the homography matrix according to preset attribute information of the target object and coordinates of the target object on the target plane includes:
determining shape information and size information of the target object according to the coordinates of the target object on the target plane;
in response to that the shape information of the target object conforms to the preset shape information and the size information of the target object conforms to the preset size information, determining that the homography matrix is verified successfully; or, in response to that the shape information of the target object does not match the preset shape information or that the size information of the target object does not match the preset size information, determining that the homography matrix verification fails.
In this implementation, by determining the shape information and the size information of the target object according to the coordinates of the target object on the target plane, in response to that the shape information of the target object conforms to the preset shape information and that the size information of the target object conforms to the preset size information, it is determined that the homography matrix verification is successful, or in response to that the shape information of the target object does not conform to the preset shape information or that the size information of the target object does not conform to the preset size information, it is determined that the homography matrix verification fails, thereby being able to improve the accuracy of the verification result of the homography matrix.
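A toy version of this verification step might look as follows: after mapping the image corners of a rectangular target (e.g., a lane marking of known size) onto the target plane, compare the measured side lengths and shape against the preset values. The corner ordering, tolerance, and rectangle test are all illustrative assumptions.

```python
def verify_homography(plane_corners, preset_size, tol=0.05):
    """Toy verification: compare a rectangular target's measured size and
    shape on the target plane against preset values. Corner ordering and the
    tolerance are illustrative assumptions, not the patent's exact criteria."""
    def dist(p, q):
        return ((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2) ** 0.5
    a, b, c, d = plane_corners  # top-left, top-right, bottom-right, bottom-left
    width = 0.5 * (dist(a, b) + dist(d, c))
    length = 0.5 * (dist(a, d) + dist(b, c))
    ok_size = (abs(width - preset_size[0]) < tol and
               abs(length - preset_size[1]) < tol)
    ok_shape = abs(dist(a, c) - dist(b, d)) < tol  # equal diagonals ~ rectangle
    return ok_size and ok_shape

# A lane marking that should measure 0.15 m wide and 2.0 m long on the road plane.
corners = [(0.0, 0.0), (0.15, 0.0), (0.15, 2.0), (0.0, 2.0)]
print(verify_homography(corners, (0.15, 2.0)))  # → True
```

If the calibrated homography were inaccurate, the mapped corners would form a distorted or wrongly scaled quadrilateral and the check would fail.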
In one possible implementation, the camera is mounted to a vehicle cabin, and the target plane is a road plane.
According to this implementation, calibration of the homography matrix from the camera to the road plane can be realized merely by acquiring the internal parameters of the camera in the cabin and the height of the camera relative to the road plane, so the camera calibration result adapts better to changes in the driving environment and accuracy in online use is improved. In addition, with this implementation, the problem of manual dependence in calibrating the vehicle-cabin camera can be solved: the calibration process is automated and needs no manual participation, which reduces the probability of errors. Moreover, the dependence on artificial reference objects and special calibration scenes is removed, so the method is applicable to a wider variety of driving environments.
According to an aspect of the present disclosure, there is provided a camera calibration apparatus including:
the first acquisition module is used for acquiring internal parameters of a camera and the height of the camera relative to a target plane;
the first determination module is used for determining three-dimensional coordinates of a plurality of space points in a camera coordinate system of the camera according to the image acquired by the camera;
the second determining module is used for determining a normal vector of the target plane according to the three-dimensional coordinates of the plurality of space points in the camera coordinate system;
a third determining module, configured to determine an attitude angle of the camera with respect to the target plane according to a normal vector of the target plane;
a fourth determining module, configured to determine a homography matrix from the camera to the target plane according to the internal parameters, the height, and the attitude angle.
In one possible implementation manner, the second determining module is configured to:
extracting a plurality of candidate planes according to the three-dimensional coordinates of the plurality of space points in the camera coordinate system;
and determining the normal vector of the target plane according to the normal vectors of the candidate planes.
In one possible implementation manner, the second determining module is configured to:
screening out potential planes of the target plane from the candidate planes according to the motion direction of the camera and normal vectors of the candidate planes;
and determining the normal vector of the target plane according to the normal vector of the potential plane.
In one possible implementation manner, the second determining module is configured to:
for any of the plurality of candidate planes, determining the candidate plane as a potential plane of the target plane in response to a normal vector of the candidate plane being approximately perpendicular to a forward motion direction vector of the camera.
In one possible implementation manner, the second determining module is configured to:
in the case that a plurality of potential planes exist, for any potential plane in the plurality of potential planes, determining the support number of the potential plane according to the normal vectors of the plurality of potential planes, wherein the support number of the potential plane represents the number of potential planes which are approximately parallel to the potential plane in the plurality of potential planes;
and determining the normal vector of the target plane according to the support numbers of the plurality of potential planes.
In one possible implementation manner, the second determining module is configured to:
and determining the normal vector of the potential plane with the maximum support number in the plurality of potential planes as the normal vector of the target plane.
In one possible implementation manner, the first determining module is configured to:
acquiring a first image and a second image acquired by the camera in a motion state;
determining a first rotation matrix and a translation vector of the camera motion from the first image and the second image;
and determining three-dimensional coordinates of a plurality of space points in a camera coordinate system of the camera according to the first rotation matrix and the translation vector.
In one possible implementation manner, the first determining module is configured to:
extracting feature points of the first image and feature points of the second image;
determining an essential matrix or a basic matrix from the first image to the second image according to the matching relation between the characteristic points of the first image and the characteristic points of the second image;
determining a first rotation matrix and a translation vector of the camera motion from the essential matrix or the fundamental matrix.
In one possible implementation manner, the fourth determining module is configured to:
determining a second rotation matrix of the camera from a front view state to a top view state according to the attitude angle;
converting a third image acquired by the camera in the forward-looking state into a fourth image, wherein the fourth image is an image looking down on the target plane;
acquiring image coordinates of a first image point in the third image and image coordinates of a second image point in the fourth image, wherein the first image point and the second image point are corresponding image points;
and determining a homography matrix from the camera to the target plane according to the image coordinates of the first image point, the image coordinates of the second image point, the second rotation matrix, the internal parameters and the height.
In one possible implementation, the apparatus further includes:
the second acquisition module is used for acquiring the image coordinates of the target object from a fifth image acquired by the camera;
the conversion module is used for converting the image coordinate of the target object to the target plane according to the homography matrix to obtain the coordinate of the target object on the target plane;
a fifth determining module, configured to determine a verification result of the homography matrix according to preset attribute information of the target object and coordinates of the target object on the target plane, where the preset attribute information includes preset shape information and/or preset size information of the target object.
In one possible implementation manner, the fifth determining module is configured to:
determining shape information and size information of the target object according to the coordinates of the target object on the target plane;
in response to that the shape information of the target object conforms to the preset shape information and the size information of the target object conforms to the preset size information, determining that the homography matrix is verified successfully; or, in response to that the shape information of the target object does not match the preset shape information or that the size information of the target object does not match the preset size information, determining that the homography matrix verification fails.
In one possible implementation, the camera is mounted to a vehicle cabin, and the target plane is a road plane.
According to an aspect of the present disclosure, there is provided an electronic device including: one or more processors; a memory for storing executable instructions; wherein the one or more processors are configured to invoke the memory-stored executable instructions to perform the above-described method.
According to an aspect of the present disclosure, there is provided a computer readable storage medium having stored thereon computer program instructions which, when executed by a processor, implement the above-described method.
In the disclosed embodiments, the internal parameters of the camera and the height of the camera relative to the target plane are acquired; three-dimensional coordinates of a plurality of spatial points in the camera coordinate system are determined from the image acquired by the camera; a normal vector of the target plane is determined according to those three-dimensional coordinates; an attitude angle of the camera relative to the target plane is determined based on the normal vector; and a homography matrix from the camera to the target plane is determined according to the internal parameters, the height, and the attitude angle. Therefore, only the internal parameters of the camera and its height relative to the target plane need to be acquired to calibrate the homography matrix from the camera to the target plane, so the calibration result adapts better to scene changes and accuracy in online use is improved. In addition, the camera calibration method provided by the embodiments of the disclosure removes manual dependence from the calibration process: the process is automated and needs no manual participation, which reduces the probability of errors. Moreover, the dependence on artificial reference objects and special calibration scenes is removed, so the method can be applied in a wider range of application scenarios.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Other features and aspects of the present disclosure will become apparent from the following detailed description of exemplary embodiments, which proceeds with reference to the accompanying drawings.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present disclosure and, together with the description, serve to explain the principles of the disclosure.
Fig. 1 shows a flowchart of a camera calibration method provided in an embodiment of the present disclosure.
Fig. 2 shows a schematic diagram of a potential plane in a camera calibration method provided by an embodiment of the present disclosure.
Fig. 3 shows a schematic diagram of a camera in a front view state and a camera in a top view state in a camera calibration method provided by an embodiment of the present disclosure.
Fig. 4 shows a schematic diagram of an image captured by a camera in a camera calibration method provided by an embodiment of the present disclosure.
Fig. 5 is a schematic diagram illustrating coordinates of a target object in a target plane in the camera calibration method provided by the embodiment of the disclosure.
Fig. 6 shows a schematic diagram of an application scenario of the camera calibration method provided by the embodiment of the present disclosure.
Fig. 7 shows a block diagram of a camera calibration apparatus provided in an embodiment of the present disclosure.
Fig. 8 illustrates a block diagram of an electronic device 800 provided by an embodiment of the disclosure.
Fig. 9 shows a block diagram of an electronic device 1900 provided by an embodiment of the disclosure.
Detailed Description
Various exemplary embodiments, features and aspects of the present disclosure will be described in detail below with reference to the accompanying drawings. In the drawings, like reference numbers can indicate functionally identical or similar elements. While the various aspects of the embodiments are presented in drawings, the drawings are not necessarily drawn to scale unless specifically indicated.
The word "exemplary" is used herein to mean "serving as an example, embodiment, or illustration." Any embodiment described herein as "exemplary" is not necessarily to be construed as preferred or advantageous over other embodiments.
The term "and/or" herein merely describes an association between associated objects and means that three relationships may exist; for example, "A and/or B" may mean: A exists alone, A and B exist simultaneously, or B exists alone. In addition, the term "at least one" herein means any one of a plurality or any combination of at least two of a plurality; for example, "including at least one of A, B and C" may mean including any one or more elements selected from the set consisting of A, B and C.
Furthermore, in the following detailed description, numerous specific details are set forth in order to provide a better understanding of the present disclosure. It will be understood by those skilled in the art that the present disclosure may be practiced without some of these specific details. In some instances, methods, means, elements and circuits that are well known to those skilled in the art have not been described in detail so as not to obscure the present disclosure.
In the related art, in automatic driving application scenarios, calibration of a monocular camera is usually performed offline in a specific scene, and the homography matrix is then no longer adjusted, regardless of whether the relative relationship between the camera plane and the road plane changes. When calibrating a camera in the related art, several pairs of reference points are usually selected on the image plane and the road plane, and the calibration result for the homography matrix is obtained by calculating the mapping relationship between the reference point pairs.
Because the related art adopts an offline camera calibration method, even if a homography matrix from the camera to the road plane can be fitted in a specific scene, it cannot cope with changes in the relative relationship between the camera and the road plane caused by, for example, vehicle bumping, so the perception of environmental information around the vehicle becomes inaccurate or even wrong in automatic driving scenarios. In addition, camera calibration in the related art is highly manual: pairs of control points must be selected by hand, and for each control point on each image, the accurate corresponding point must be found manually in the road-plane coordinate system. Because the control points are selected on the road plane in the real physical coordinate system, it is difficult to ensure that their measured coordinates in the road-plane coordinate system are accurate, which seriously affects the precision of camera calibration. Moreover, manually selecting ground control points requires setting up artificial reference objects such as two-dimensional codes or traffic cones, so camera calibration must be performed in a special scene and cannot be performed in more general scenes, which limits the application range.
The embodiment of the disclosure provides a camera calibration method, which comprises: obtaining the internal parameters of a camera and the height of the camera relative to a target plane; determining three-dimensional coordinates of a plurality of spatial points in a camera coordinate system of the camera from an image acquired by the camera; determining a normal vector of the target plane according to the three-dimensional coordinates of the plurality of spatial points in the camera coordinate system; determining an attitude angle of the camera relative to the target plane based on the normal vector of the target plane; and determining a homography matrix from the camera to the target plane according to the internal parameters, the height and the attitude angle. Therefore, only the internal parameters of the camera and the height of the camera relative to the target plane need to be acquired to calibrate the homography matrix from the camera to the target plane, so that the calibration result of the camera is more adaptive to changes of the scene, and the accuracy in online use is improved. In addition, by adopting the camera calibration method provided by the embodiment of the disclosure, the problem of manual dependence in the calibration process can be solved: the calibration process is automated and needs no manual participation, so the probability of errors can be reduced. Moreover, the dependence on an artificial reference object can be removed, special calibration scenes are no longer required, and the method can be applied to a wider range of application scenarios.
The following describes the camera calibration method provided by the embodiments of the present disclosure in detail with reference to the accompanying drawings.
Fig. 1 shows a flowchart of a camera calibration method provided in an embodiment of the present disclosure. In a possible implementation manner, the camera calibration method may be executed by a terminal device or a server or other processing device. The terminal device may be a User Equipment (UE), a mobile device, a User terminal, a camera, a Personal Digital Assistant (PDA), a handheld device, a computing device, a vehicle-mounted device, or a wearable device. In some possible implementations, the camera calibration method may be implemented by a processor calling computer readable instructions stored in a memory. As shown in fig. 1, the camera calibration method includes steps S11 to S15.
In step S11, the internal parameters of the camera and the height of the camera relative to the target plane are acquired.
In step S12, three-dimensional coordinates of a plurality of spatial points in a camera coordinate system of the camera are determined from the image captured by the camera.
In step S13, a normal vector of the target plane is determined according to the three-dimensional coordinates of the plurality of spatial points in the camera coordinate system.
In step S14, an attitude angle of the camera with respect to the target plane is determined based on the normal vector of the target plane.
In step S15, a homography matrix of the camera to the target plane is determined according to the internal parameters, the height and the attitude angle.
In the disclosed embodiment, the pre-calibrated parameters include the internal parameters of the camera and the height of the camera relative to the target plane.
In one possible implementation, the camera is mounted in a vehicle cabin, and the target plane is a road plane. According to this implementation, calibration of the homography matrix from the camera to the road plane can be realized only by acquiring the internal parameters of the camera in the cabin and the height of the camera relative to the road plane, so that the camera calibration result is more adaptive to changes of the driving environment, and the accuracy in online use is improved. In addition, by adopting this implementation, the problem of manual dependence in the calibration process of the vehicle cabin camera can be solved: the calibration process is automated and needs no manual participation, so the probability of errors can be reduced. Moreover, the dependence on artificial reference objects can be removed, special calibration scenes are no longer required, and the method can be applied to a wider variety of driving environments.
Of course, in other application scenarios, the camera may be installed at other positions, and the target plane may also be other planes, which is not limited herein.
In the embodiment of the present disclosure, based on multiple frames of images collected by a camera in a motion state, the three-dimensional coordinates of a plurality of spatial points in the camera coordinate system of the camera may be obtained by using a method such as SfM (Structure from Motion). The spatial points may represent static object points in three-dimensional space. The coordinates of these spatial points in the camera coordinate system may be three-dimensional coordinates without absolute scale.
In one possible implementation, the determining three-dimensional coordinates of a plurality of spatial points in a camera coordinate system of the camera according to the image acquired by the camera includes: acquiring a first image and a second image acquired by the camera in a motion state; determining a first rotation matrix and a translation vector of the camera motion from the first image and the second image; and determining three-dimensional coordinates of a plurality of space points in a camera coordinate system of the camera according to the first rotation matrix and the translation vector. In this implementation, a first image and a second image acquired by the camera in motion at different times may be acquired. That is, the first image and the second image are images captured by the camera at different positions. According to the incidence relation between the image information of the first image and the image information of the second image, a first rotation matrix and a translation vector of the camera motion can be obtained. Wherein the first rotation matrix represents a rotation matrix of the camera motion. Based on the first rotation matrix and the translation vector of the camera motion, three-dimensional coordinates of a plurality of spatial points in a camera coordinate system of the camera can be reconstructed. In this implementation, by acquiring a first image and a second image captured by the camera in a moving state, determining a first rotation matrix and a translation vector of the camera motion according to the first image and the second image, and determining three-dimensional coordinates of a plurality of spatial points in the camera coordinate system of the camera according to the first rotation matrix and the translation vector, three-dimensional coordinates of a plurality of spatial points in the camera coordinate system of the camera can be determined quickly and accurately based on the image captured by the camera.
As one example of this implementation, the determining a first rotation matrix and a translation vector of the camera motion from the first image and the second image comprises: extracting feature points of the first image and feature points of the second image; determining an essential matrix or a fundamental matrix from the first image to the second image according to the matching relationship between the feature points of the first image and the feature points of the second image; and determining a first rotation matrix and a translation vector of the camera motion from the essential matrix or the fundamental matrix. In this example, the feature points extracted from the first image and the second image may be FAST (Features from Accelerated Segment Test) feature points, SIFT (Scale-Invariant Feature Transform) feature points, SURF (Speeded-Up Robust Features) feature points, or the like, which is not limited herein. In this example, the matching relationship between the feature points of the first image and the feature points of the second image may be determined by sparse optical flow tracking or the like.
In this example, the essential matrix or the fundamental matrix from the first image to the second image may be determined in a case where the number of matching point pairs between the first image and the second image is greater than or equal to a first preset threshold. The image may be reacquired in case the number of pairs of matching points between the first image and the second image is smaller than a first preset threshold. For example, the first preset threshold may be 5 or 10, etc.
In one example, an Essential Matrix (E Matrix) from the first image to the second image may be determined according to a matching relationship between the feature points of the first image and the feature points of the second image, and a first rotation Matrix and a translation vector of the camera motion may be determined according to the Essential Matrix. In another example, a Fundamental Matrix (F Matrix) from the first image to the second image may be determined according to a matching relationship between feature points of the first image and feature points of the second image, and a first rotation Matrix and a translation vector of the camera motion may be determined according to the Fundamental Matrix.
In this example, by extracting feature points of the first image and feature points of the second image, determining an essential matrix or a fundamental matrix from the first image to the second image according to a matching relationship between the feature points of the first image and the feature points of the second image, and determining a first rotation matrix and a translation vector of the camera motion according to the essential matrix or the fundamental matrix, the first rotation matrix and the translation vector of the camera motion can be accurately determined, and thus the accuracy of the three-dimensional coordinates of the determined spatial points in the camera coordinate system of the camera can be improved.
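As an illustrative sketch of this reconstruction step (not the patent's exact implementation; it assumes the first rotation matrix R and translation vector t have already been recovered from the essential or fundamental matrix, and the function name is ours), the three-dimensional coordinates of a spatial point in the first camera frame can be obtained from a pair of matched feature points by linear triangulation:

```python
import numpy as np

def triangulate_point(K, R, t, x1, x2):
    """Linearly triangulate one spatial point from two views.

    K: 3x3 camera intrinsics; R, t: rotation ("first rotation matrix")
    and translation ("translation vector") of the second view relative
    to the first; x1, x2: matched pixel coordinates (u, v) in the first
    and second image. Returns the 3D coordinates in the first camera's
    coordinate system, up to the unknown scale of t (no absolute scale).
    """
    P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])  # first view: K [I | 0]
    P2 = K @ np.hstack([R, t.reshape(3, 1)])           # second view: K [R | t]
    # Standard DLT: each view contributes two linear constraints on X.
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]                 # null-space direction = homogeneous point
    return X[:3] / X[3]
```

Repeating this over all matched point pairs yields the set of spatial points whose normals are analyzed in the following steps.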
In one possible implementation manner, the determining a normal vector of the target plane according to three-dimensional coordinates of the plurality of spatial points in the camera coordinate system includes: extracting a plurality of candidate planes according to the three-dimensional coordinates of the plurality of spatial points in the camera coordinate system; and determining the normal vector of the target plane according to the normal vectors of the candidate planes. In this implementation, a candidate plane may represent an arbitrary plane extracted from the first image or the second image. Considering that at least 3 non-collinear points are required to represent a plane, in this implementation, a triangulation method may be used to extract a plurality of candidate planes from the first image or the second image according to the three-dimensional coordinates of the plurality of spatial points in the camera coordinate system. For example, based on the Delaunay triangulation method, the image plane of the first image or the second image may be divided into adjacent triangles whose vertices are corner points in the image, where each corner point corresponds to the three-dimensional coordinates, in the camera coordinate system, of the spatial point reconstructed from it. For any candidate plane among the plurality of candidate planes, the normal vector of the candidate plane can be obtained according to the three-dimensional coordinates, in the camera coordinate system, of the spatial points corresponding to the 3 vertices of the candidate plane. In this implementation, the normal vector of a candidate plane may represent its normal vector in the camera coordinate system, and the normal vector of the target plane may represent the normal vector of the target plane in the camera coordinate system.
In this implementation, a plurality of candidate planes are extracted according to the three-dimensional coordinates of the plurality of spatial points in the camera coordinate system, and the normal vector of the target plane is determined according to the normal vectors of the plurality of candidate planes, so that the normal vector of the target plane can be determined efficiently and accurately.
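For instance, the normal vector of a candidate plane spanned by the three triangle vertices can be computed with a cross product (a minimal sketch under the steps above; the function name is illustrative):

```python
import numpy as np

def triangle_normal(p0, p1, p2):
    """Unit normal vector of a candidate plane spanned by three
    non-collinear spatial points (the three vertices of one Delaunay
    triangle), expressed in the camera coordinate system."""
    n = np.cross(p1 - p0, p2 - p0)   # vector perpendicular to both edges
    return n / np.linalg.norm(n)     # normalize to unit length
```

The sign of the normal is arbitrary here; the screening steps below compare directions with absolute dot products, so either orientation works.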
As an example of this implementation, the determining a normal vector of the target plane according to normal vectors of the candidate planes includes: screening out potential planes of the target plane from the candidate planes according to the motion direction of the camera and the normal vectors of the candidate planes; and determining the normal vector of the target plane according to the normal vectors of the potential planes. In this example, the potential planes may represent those candidate planes that have a greater probability of belonging to the target plane. The normal vector of a potential plane may represent its normal vector in the camera coordinate system. According to the relationship between the target plane and the motion direction of the camera in different application scenarios, potential planes of the target plane can be screened out from the plurality of candidate planes, so that most of the candidate planes can be eliminated. By screening out potential planes of the target plane from the plurality of candidate planes according to the motion direction of the camera and the normal vectors of the candidate planes, and determining the normal vector of the target plane according to the normal vectors of the potential planes, the accuracy of the determined normal vector of the target plane can be further improved, and the speed of determining the normal vector of the target plane can also be increased.
In one example, the screening out potential planes of the target plane from the candidate planes according to the motion direction of the camera and normal vectors of the candidate planes includes: for any of the plurality of candidate planes, determining the candidate plane as a potential plane of the target plane in response to the normal vector of the candidate plane being approximately perpendicular to the forward motion direction vector of the camera. In application scenarios such as camera calibration for automatic driving, the normal vector of the target plane is approximately perpendicular to the forward motion direction of the camera, so potential planes can be screened out from the candidate planes by judging whether the normal vector of each candidate plane is approximately perpendicular to the forward motion direction vector of the camera. Here, the normal vector of a candidate plane being approximately perpendicular to the forward motion direction vector of the camera may mean that the included angle between the normal vector of the candidate plane and the forward motion direction vector of the camera deviates from 90° by no more than a second preset threshold. For example, the second preset threshold may be 10°, 8°, 15°, etc. In this example, for any candidate plane among the plurality of candidate planes, if the normal vector of the candidate plane is approximately perpendicular to the forward motion direction vector of the camera, the candidate plane may be determined as a potential plane of the target plane; if the normal vector of the candidate plane is not approximately perpendicular to the forward motion direction vector of the camera (for example, the included angle between the normal vector of the candidate plane and the forward motion direction vector of the camera deviates from 90° by more than the second preset threshold), the candidate plane is not determined as a potential plane of the target plane, that is, the candidate plane is eliminated.
In this example, by determining the candidate plane as the potential plane of the target plane in response to the normal vector of the candidate plane being substantially perpendicular to the forward motion direction vector of the camera for any one of the plurality of candidate planes, the potential plane of the target plane can be quickly and accurately screened out from a large number of candidate planes.
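The screening step above can be sketched as follows (an illustration only; the 10° tolerance stands in for the second preset threshold, and the function name is ours):

```python
import numpy as np

def screen_potential_planes(normals, forward, max_dev_deg=10.0):
    """Keep candidate planes whose normal is approximately perpendicular
    to the camera's forward motion direction vector.

    normals: list of candidate-plane normal vectors in the camera frame;
    forward: forward motion direction vector of the camera;
    max_dev_deg: allowed deviation of the included angle from 90 degrees
    (the 'second preset threshold'; 10 degrees is an assumed value).
    Returns the indices of the retained (potential) planes.
    """
    forward = forward / np.linalg.norm(forward)
    kept = []
    for i, n in enumerate(normals):
        n = n / np.linalg.norm(n)
        # |dot| gives an angle in [0, 90]; perpendicular vectors give 90.
        angle = np.degrees(np.arccos(np.clip(abs(n @ forward), 0.0, 1.0)))
        if 90.0 - angle <= max_dev_deg:
            kept.append(i)
    return kept
```

Using the absolute dot product makes the check independent of the arbitrary sign of each triangle normal.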
In one example, the determining a normal vector of the target plane according to the normal vector of the potential plane includes: in the case that a plurality of potential planes exist, for any potential plane in the plurality of potential planes, determining the support number of the potential plane according to the normal vectors of the plurality of potential planes, wherein the support number of a potential plane represents the number of potential planes, among the plurality of potential planes, that are approximately parallel to it; and determining the normal vector of the target plane according to the support numbers of the plurality of potential planes. For example, suppose there are 20 potential planes, potential plane 1 to potential plane 20. For potential plane 1, if there are 6 potential planes approximately parallel to potential plane 1 among potential planes 2 to 20, the support number of potential plane 1 may be determined to be 6. For potential plane 12, if there are 2 potential planes approximately parallel to potential plane 12 among potential planes 1 to 11 and potential planes 13 to 20, the support number of potential plane 12 may be determined to be 2. Two potential planes being approximately parallel may mean that the included angle between their normal vectors is smaller than or equal to a third preset threshold. That is, if the included angle between the normal vectors of two potential planes is less than or equal to the third preset threshold, the two potential planes may be determined to be approximately parallel; if the included angle between the normal vectors of the two potential planes is larger than the third preset threshold, the two potential planes are determined not to be approximately parallel. For example, the third preset threshold may be 2°, 3°, 5°, and so on.
In this example, in the case that there are multiple potential planes, for any potential plane in the multiple potential planes, the number of supports of the potential plane is determined according to the normal vectors of the multiple potential planes, and the normal vector of the target plane is determined according to the number of supports of the multiple potential planes, so that the accuracy of the determined normal vector of the target plane can be improved.
For example, the determining a normal vector of the target plane according to the support numbers of the plurality of potential planes includes: and determining the normal vector of the potential plane with the maximum support number in the plurality of potential planes as the normal vector of the target plane. In this example, in order to obtain the normal vector of the target plane, it can be assumed that the planes belonging to the target plane among the potential planes occupy most. The normal vector of the potential plane with the largest support number in the plurality of potential planes is determined as the normal vector of the target plane, so that the accuracy of the determined normal vector of the target plane can be improved, namely, the accuracy of the determined target plane can be improved, and the accuracy of a camera calibration result can be improved.
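The support-counting rule above can be sketched as follows (an illustrative implementation; the 3° tolerance stands in for the third preset threshold, and the function name is ours):

```python
import numpy as np

def pick_target_normal(normals, parallel_tol_deg=3.0):
    """Return the normal vector of the potential plane with the largest
    support number.

    Two potential planes count as approximately parallel when the angle
    between their normals is at most parallel_tol_deg (the 'third preset
    threshold'; 3 degrees is an assumed value).
    """
    N = np.array([n / np.linalg.norm(n) for n in normals])
    cos_tol = np.cos(np.radians(parallel_tol_deg))
    # Support number of plane i = count of other planes parallel to it.
    support = [
        sum(1 for j in range(len(N)) if j != i and abs(N[i] @ N[j]) >= cos_tol)
        for i in range(len(N))
    ]
    return N[int(np.argmax(support))]
```

This encodes the assumption stated in the text: among the potential planes, those actually belonging to the target plane form the majority, so the most-supported normal is taken as the target plane's normal.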
Fig. 2 shows a schematic diagram of potential planes in the camera calibration method provided by an embodiment of the present disclosure. In the example shown in fig. 2, among the plurality of potential planes, the supporters of potential plane 5 include potential plane 0 to potential plane 4, potential plane 6 and potential plane 7, and potential plane 5 is the potential plane with the largest support number among the plurality of potential planes; therefore, the normal vector of potential plane 5 may be determined as the normal vector of the target plane. In the example shown in fig. 2, the target plane is a road plane, and as can be seen from fig. 2, potential planes 1 to 7 all lie on the real road plane.
As another example, the determining a normal vector of the target plane according to the support numbers of the plurality of potential planes includes: determining the normal vector of any supporter of the potential plane with the largest support number in the plurality of potential planes as the normal vector of the target plane, wherein a supporter of a potential plane refers to a potential plane that is approximately parallel to it.
In another example, the determining the normal vector of the target plane according to the normal vector of the potential plane includes: in the case that a plurality of potential planes exist, determining a first average vector of normal vectors of the plurality of potential planes; and determining the normal vector with the smallest included angle with the first average vector in the normal vectors of the plurality of potential planes as the normal vector of the target plane. Wherein the first average vector represents an average vector of normal vectors of the plurality of potential planes.
In another example, the determining the normal vector of the target plane according to the normal vector of the potential plane includes: in the case where there are a plurality of potential planes, a first average vector of normal vectors of the plurality of potential planes is determined, and the first average vector is determined as a normal vector of the target plane.
As another example of this implementation, the determining a normal vector of the target plane according to normal vectors of the candidate planes includes: determining a second average vector of normal vectors of the plurality of candidate planes; and determining the normal vector with the smallest included angle with the second average vector in the normal vectors of the candidate planes as the normal vector of the target plane. Wherein the second average vector represents an average vector of normal vectors of the plurality of candidate planes.
As another example of this implementation, the determining a normal vector of the target plane according to normal vectors of the candidate planes includes: and determining a second average vector of the normal vectors of the candidate planes, and determining the second average vector as the normal vector of the target plane.
In the embodiment of the present disclosure, after obtaining the normal vector of the target plane, the attitude angle of the camera relative to the target plane may be determined according to the normal vector of the target plane. Wherein the pose angle may be used to represent a horizontal pose of the camera with respect to a target plane. For example, the attitude angle may include a pitch angle (pitch) and a roll angle (roll) of the camera with respect to a target plane. Wherein the roll angle of the camera relative to the target plane may represent an angle of rotation about the z-axis.
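One way to recover the pitch and roll angles from the plane normal is sketched below. The axis convention is an assumption not fixed by the text (camera x to the right, y down, z forward, so a level camera sees the road normal as (0, -1, 0)); other conventions change the formulas' signs.

```python
import numpy as np

def attitude_from_normal(n):
    """Pitch and roll of the camera relative to the target plane,
    computed from the plane's normal vector in the camera frame.

    Assumed convention: camera x right, y down, z forward; a perfectly
    level camera sees the target-plane normal as (0, -1, 0).
    Returns (pitch, roll) in radians.
    """
    n = n / np.linalg.norm(n)
    if n[1] > 0:              # orient the normal "upward" (negative y)
        n = -n
    pitch = np.arcsin(np.clip(n[2], -1.0, 1.0))  # tilt about the x-axis
    roll = np.arctan2(n[0], -n[1])               # tilt about the z-axis
    return pitch, roll
```

For a level camera the function returns (0, 0); a pure forward tilt shows up only in the pitch component.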
In embodiments of the present disclosure, the homography matrix of the camera to the target plane may be determined with the camera's internal parameters, the camera's height relative to the target plane, and the pose angle. The homography matrix from the camera to the target plane may represent a homography matrix from an image coordinate system corresponding to the camera to a world coordinate system corresponding to the target plane.
In one possible implementation, the determining a homography matrix of the camera to the target plane according to the internal parameters, the height, and the attitude angle includes: determining a second rotation matrix of the camera from a front-view state to a top-view state according to the attitude angle; converting a third image acquired by the camera in the front-view state into a fourth image, wherein the fourth image is an image looking down on the target plane; acquiring image coordinates of a first image point in the third image and image coordinates of a second image point in the fourth image, wherein the first image point and the second image point are corresponding image points; and determining the homography matrix from the camera to the target plane according to the image coordinates of the first image point, the image coordinates of the second image point, the second rotation matrix, the internal parameters and the height. Fig. 3 shows a schematic diagram of a camera in a front-view state and a camera in a top-view state in a camera calibration method provided by an embodiment of the present disclosure. In fig. 3, the origin of the camera coordinate system of the camera is o, n denotes the normal vector of the target plane, and h denotes the height of the camera relative to the target plane (i.e. the distance between the origin of the camera coordinate system and the target plane); fig. 3 also shows the y-axis and the z-axis of the camera coordinate system. R_DF denotes the second rotation matrix, i.e. the rotation matrix of the camera from the front-view state to the top-view state. After obtaining the attitude angle, it may be assumed that the camera is rotated from the front-view state to the top-view state, as shown in fig. 3. After rotation, the z-axis of the camera coordinate system of the camera is parallel to the normal vector of the target plane, and the optical center position of the camera is unchanged.
In this implementation, the first image point represents a point in the third image, the second image point represents a point in the fourth image, the image coordinates of the first image point represent the coordinates of the first image point in the image coordinate system, and the image coordinates of the second image point represent the coordinates of the second image point in the image coordinate system. The correspondence between the image coordinates x_F of the first image point and the image coordinates x_D of the second image point can be expressed by equation 1:

K^-1 · x_D = λ · R_DF · K^-1 · x_F    (equation 1)

wherein K represents the internal parameter matrix of the camera, R_DF represents the second rotation matrix, and λ represents a scale factor. λ arises because the depths under the camera coordinate system of the camera in the front-view state and under the camera coordinate system of the camera in the top-view state are both unknown; λ can be eliminated by normalizing the homogeneous coordinates.

By multiplying x_F by K^-1, normalizing the homogeneous coordinates, and then multiplying by h, the coordinates of the second image point normalized to height h under the camera coordinate system are obtained. The homography matrix H from the camera to the target plane can then be represented by equation 2:

H = R_DF · K^-1    (equation 2)
In the above implementation manner, a second rotation matrix of the camera from the front-view state to the top-view state is determined according to the attitude angle, a third image acquired by the camera in the front-view state is converted into a fourth image, image coordinates of a first image point in the third image and image coordinates of a second image point in the fourth image are acquired, and the homography matrix from the camera to the target plane is determined according to the image coordinates of the first image point, the image coordinates of the second image point, the second rotation matrix, the internal parameters and the height, so that accurate camera calibration can be achieved.
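Applying equation 2 can be sketched as follows (an illustration of the mapping, not the patent's exact code; the function name is ours): a front-view pixel is back-projected through K^-1, rotated into the virtual top-down frame by R_DF, and the unknown scale factor λ is fixed by normalizing the homogeneous coordinates and scaling by the camera height h.

```python
import numpy as np

def pixel_to_plane(x_pixel, K, R_DF, h):
    """Map a pixel in the front-view image to metric coordinates on the
    target plane using H = R_DF K^-1 (equation 2).

    x_pixel: (u, v) pixel coordinates; K: 3x3 intrinsics; R_DF: second
    rotation matrix (front-view to top-view); h: camera height above
    the target plane. Returns (X, Y) on the plane.
    """
    x_h = np.array([x_pixel[0], x_pixel[1], 1.0])   # homogeneous pixel
    ray = R_DF @ np.linalg.inv(K) @ x_h             # ray in top-down frame
    ray = ray / ray[2]                              # eliminate lambda
    return h * ray[:2]                              # scale by height
```

For example, with R_DF = I (camera already looking straight down) the pixel at the principal point maps to (0, 0) on the plane, directly below the optical center.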
In one possible implementation, after the determining the homography matrix of the camera to the target plane, the method further includes: acquiring image coordinates of a target object from a fifth image acquired by the camera; converting the image coordinate of the target object to the target plane according to the homography matrix to obtain the coordinate of the target object on the target plane; and determining a verification result of the homography matrix according to preset attribute information of the target object and the coordinate of the target object on the target plane, wherein the preset attribute information comprises preset shape information and/or preset size information of the target object. In this implementation, the target object may represent any object used to validate the homography matrix. For example, in an application scenario of automatic driving, the target object may be a road sign or the like. In this implementation manner, a pre-trained neural network may be used to identify the target object in the fifth image, so as to obtain image coordinates of the target object in the fifth image. Alternatively, the fifth image may be compared with a preset image template of the target object to determine the image coordinates of the target object in the fifth image.
In this implementation, the image coordinates of the target object may be converted to the target plane according to the homography matrix, so as to obtain the coordinates of the target object on the target plane. The coordinates of the target object in the target plane may represent coordinates of the target object in a world coordinate system corresponding to the target plane. Wherein, the X axis and the Y axis of the world coordinate system corresponding to the target plane can be on the target plane, and the Z axis can be perpendicular to the target plane. As one example of this implementation, the coordinates of the target object in the target plane may include only X and Y coordinates. As another example of this implementation, the coordinates of the target object in the target plane may include X, Y, and Z coordinates.
In this implementation, the preset attribute information may represent attribute information of a preset target object. As one example of this implementation, the preset attribute information includes preset shape information of the target object, wherein the preset shape information may represent shape information of the preset target object. As another example of this implementation, the preset attribute information includes preset size information of the target object, wherein the preset size information may represent size information of the preset target object. As another example of this implementation, the preset attribute information includes preset shape information and preset size information of the target object. Of course, the preset attribute information may also include other appearance attribute information of the preset target object, which is not limited herein.
In this implementation, according to the preset shape information of the target object and the coordinates of the target object on the target plane, it may be determined whether the shape information of the target object coincides with the preset shape information; and/or judging whether the size information of the target object is consistent with the preset size information or not according to the preset size information of the target object and the coordinate of the target object on the target plane. In the implementation manner, image coordinates of a target object are acquired from a fifth image acquired by the camera, the image coordinates of the target object are converted to the target plane according to the homography matrix, coordinates of the target object on the target plane are acquired, and a verification result of the homography matrix is determined according to preset attribute information including preset shape information and/or preset size information of the target object and the coordinates of the target object on the target plane.
As an example of this implementation, the determining, according to the preset attribute information of the target object and the coordinate of the target object on the target plane, a verification result of the homography matrix includes: determining shape information and size information of the target object according to the coordinates of the target object on the target plane; in response to that the shape information of the target object conforms to the preset shape information and the size information of the target object conforms to the preset size information, determining that the homography matrix is verified successfully; or, in response to that the shape information of the target object does not match the preset shape information or that the size information of the target object does not match the preset size information, determining that the homography matrix verification fails. In this example, two conditions, namely "the shape information of the target object coincides with the preset shape information" and "the size information of the target object coincides with the preset size information" need to be satisfied for the homography matrix verification to be successful, and if any one of the two conditions is not satisfied, it is determined that the homography matrix verification fails. 
In this example, by determining shape information and size information of the target object according to coordinates of the target object on the target plane, in response to the shape information of the target object conforming to the preset shape information and the size information of the target object conforming to the preset size information, it is determined that the homography matrix verification is successful, or in response to the shape information of the target object not conforming to the preset shape information or the size information of the target object not conforming to the preset size information, it is determined that the homography matrix verification fails, whereby the accuracy of the verification result of the homography matrix can be improved.
Fig. 4 shows a schematic diagram of an image captured by a camera in the camera calibration method provided by an embodiment of the present disclosure. In the example shown in fig. 4, the target plane is a road plane. As shown in fig. 4, because the image is formed by perspective projection, parallel lines in the real world intersect at vanishing points in the image. If the homography matrix is accurate, parallel lines should remain parallel after points in the image are transformed to the road plane through the homography matrix. In one example, the target object may be a dashed line segment in a lane line. Fig. 5 is a schematic diagram illustrating the coordinates of the target object on the target plane in the camera calibration method provided by the embodiment of the disclosure. As shown in fig. 5, straight lines may be extracted from the image and densely sampled, and the dashed line segment may then be mapped to the world coordinate system corresponding to the road plane to obtain the X coordinates and Y coordinates of the dashed line segment on the road plane. According to the X and Y coordinates of the end points of the dashed line segment on the road plane, it can be detected whether the left and right edge lines of the dashed line segment are parallel. In addition, it can be verified whether the size of the dashed line segment is correct. For example, the distance between the two parallel edge lines should be around 15 cm.
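The parallelism and width check described above can be sketched as follows; this is a minimal illustration, assuming `left_pts` and `right_pts` are pixel samples along the two edge lines of a dashed line segment, and the angular and width tolerances are hypothetical values, not taken from the disclosure:

```python
import numpy as np

def verify_homography(H, left_pts, right_pts,
                      expected_width=0.15, angle_tol_deg=2.0, width_tol=0.03):
    """Map the two edge lines of a dashed lane segment to the road plane
    through H and check that they stay parallel and ~expected_width apart."""
    def to_plane(pts):
        pts_h = np.hstack([pts, np.ones((len(pts), 1))])  # homogeneous pixels
        mapped = (H @ pts_h.T).T
        return mapped[:, :2] / mapped[:, 2:3]             # (X, Y) on the plane

    def direction(pts):
        centered = pts - pts.mean(axis=0)                 # least-squares line fit
        return np.linalg.svd(centered)[2][0]              # principal direction

    left, right = to_plane(left_pts), to_plane(right_pts)
    d_left, d_right = direction(left), direction(right)
    angle = np.degrees(np.arccos(np.clip(abs(d_left @ d_right), 0.0, 1.0)))
    normal = np.array([-d_left[1], d_left[0]])            # perpendicular to left edge
    width = np.mean(np.abs((right - left.mean(axis=0)) @ normal))
    return bool(angle < angle_tol_deg and abs(width - expected_width) < width_tol)
```

With an accurate homography the mapped edges stay parallel at the expected spacing and the check passes; a homography that skews one edge or maps the edges to the wrong spacing fails it.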
As another example of this implementation, the determining, according to the preset attribute information of the target object and the coordinates of the target object on the target plane, a verification result of the homography matrix includes: determining the shape information of the target object according to the coordinates of the target object on the target plane; in response to the shape information of the target object conforming to the preset shape information, determining that the homography matrix is verified successfully; or, in response to the shape information of the target object not conforming to the preset shape information, determining that the homography matrix verification fails.
As another example of this implementation, the determining, according to the preset attribute information of the target object and the coordinates of the target object on the target plane, a verification result of the homography matrix includes: determining the size information of the target object according to the coordinates of the target object on the target plane; in response to the size information of the target object conforming to the preset size information, determining that the homography matrix is verified successfully; or, in response to the size information of the target object not conforming to the preset size information, determining that the homography matrix verification fails.
In a possible implementation manner, online calibration can be performed again in response to the verification result of the homography matrix being a verification failure, so that the current calibration result can be updated in real time, the homography matrix from the camera to the target plane can better adapt to changes in the current scene, and the accuracy in online use is improved.
The camera calibration method provided by the embodiment of the disclosure can be applied to the technical fields of unmanned driving, monocular cameras, automatic calibration and the like. For example, the camera calibration method provided by the embodiment of the disclosure may be applied to offline calibration of a homography matrix from a camera to the ground in an unmanned driving scene, and may also be applied to online calibration of a homography matrix from a camera to the ground in an unmanned driving scene.
The following describes a camera calibration method provided by the embodiments of the present disclosure through a specific application scenario. The application scenario may be an autopilot application scenario. The camera may be a monocular camera mounted in the cabin and the target plane may be a road plane. Fig. 6 shows a schematic diagram of an application scenario of the camera calibration method provided by the embodiment of the present disclosure. In the application scenario, the pre-calibrated parameters include an internal parameter K of the camera and a height h of the camera relative to the road plane.
An image I0 and an image I1 captured by the camera can be obtained, and feature point extraction is performed on the image I0 and the image I1 respectively to obtain the feature points of the image I0 and the feature points of the image I1. The matching relationship between the feature points of the image I0 and those of the image I1 can be determined by a sparse optical flow tracking method. From this matching relationship, an essential matrix or a fundamental matrix from the image I0 to the image I1 can be determined. From the essential matrix or the fundamental matrix, a first rotation matrix and a translation vector of the camera motion can be determined. According to the first rotation matrix and the translation vector, 3D reconstruction can be carried out to obtain the three-dimensional coordinates of a plurality of spatial points in the camera coordinate system of the camera.
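The reconstruction step can be sketched as follows. The essential or fundamental matrix and the first rotation matrix / translation vector are typically recovered with a library routine (for example OpenCV's `findEssentialMat` and `recoverPose`); the sketch below assumes `R` and `t` are already known and shows a minimal linear (DLT) triangulation of the matched feature points, which is one standard way to obtain the 3D points:

```python
import numpy as np

def triangulate(pts0, pts1, K, R, t):
    """DLT triangulation of matched pixel coordinates pts0/pts1 (Nx2),
    given the intrinsics K and the recovered camera motion (R, t).
    Returns Nx3 points in the first camera's coordinate system."""
    P0 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])   # first camera at the origin
    P1 = K @ np.hstack([R, t.reshape(3, 1)])            # second camera pose
    points = []
    for (u0, v0), (u1, v1) in zip(pts0, pts1):
        # Each view contributes two linear constraints on the homogeneous point X
        A = np.vstack([u0 * P0[2] - P0[0],
                       v0 * P0[2] - P0[1],
                       u1 * P1[2] - P1[0],
                       v1 * P1[2] - P1[1]])
        X = np.linalg.svd(A)[2][-1]                     # null-space solution
        points.append(X[:3] / X[3])
    return np.array(points)
```

Because the translation recovered from an essential matrix is only known up to scale, the triangulated points are likewise up to scale; the pre-calibrated camera height h is what later fixes the metric scale.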
According to the three-dimensional coordinates of the plurality of spatial points in the camera coordinate system, a plurality of candidate planes can be extracted based on Delaunay triangulation. For any one of the plurality of candidate planes, the candidate plane may be determined to be a potential plane of the road plane in response to the normal vector of the candidate plane being approximately perpendicular to the forward motion direction vector of the camera. When a plurality of potential planes exist, the support number of each potential plane can be determined, and the normal vector of the potential plane with the largest support number among the plurality of potential planes can be determined as the normal vector of the road plane.
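The screening and support-counting logic can be sketched as follows; this is a minimal illustration in which the two angular thresholds are assumptions rather than values from the disclosure:

```python
import numpy as np

def pick_road_normal(candidate_normals, forward, perp_tol_deg=10.0, parallel_tol_deg=5.0):
    """Keep candidate planes whose normal is approximately perpendicular to
    the camera's forward motion direction (potential road planes), then
    return the normal of the potential plane with the largest support
    (number of approximately parallel potential planes)."""
    fwd = forward / np.linalg.norm(forward)
    normals = [n / np.linalg.norm(n) for n in candidate_normals]
    # Deviation from perpendicularity is arcsin(|n . fwd|)
    potential = [n for n in normals
                 if np.degrees(np.arcsin(min(abs(n @ fwd), 1.0))) < perp_tol_deg]
    if not potential:
        return None
    cos_tol = np.cos(np.radians(parallel_tol_deg))
    support = [sum(abs(n @ m) > cos_tol for m in potential) for n in potential]
    return potential[int(np.argmax(support))]
```

Note that a vertical wall running alongside the road also has a normal roughly perpendicular to the motion direction, which is one reason a support count helps disambiguate the road plane from other potential planes.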
After obtaining the normal vector of the road plane, the pitch angle and the roll angle of the camera relative to the road plane can be determined according to the normal vector of the road plane. A second rotation matrix of the camera from the front view state to the top view state can then be obtained according to the pitch angle and the roll angle of the camera relative to the road plane. Equations 1 and 2 above may then be used to derive the homography matrix from the camera to the road plane.
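The derivation of the pitch and roll angles from the road-plane normal, and of the second rotation matrix from the front view state to the top view state, can be sketched as follows. The axis convention (camera x right, y down, z forward) is an assumption, and the disclosure's Equations 1 and 2 are not reproduced in this section:

```python
import numpy as np

def topview_rotation(n):
    """Given the road-plane normal n in camera coordinates, return the
    camera's pitch and roll relative to the road plane and the rotation R2
    whose rows are the axes of a virtual top-down camera."""
    n = n / np.linalg.norm(n)
    if n[1] > 0:                             # make the normal point up (toward -y)
        n = -n
    pitch = np.arctan2(-n[2], -n[1])         # tilt of the optical axis toward the road
    roll = np.arctan2(-n[0], -n[1])          # sideways tilt about the optical axis
    z_new = -n                               # top-view optical axis: straight down
    x_new = np.array([1.0, 0.0, 0.0])
    x_new -= (x_new @ z_new) * z_new         # Gram-Schmidt: keep x in the road plane
    x_new /= np.linalg.norm(x_new)
    y_new = np.cross(z_new, x_new)
    return pitch, roll, np.vstack([x_new, y_new, z_new])
```

For a level camera (normal [0, -1, 0] under this convention) both angles are zero and R2 simply rotates the downward direction into the new optical axis.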
After the homography matrix from the camera to the road plane is determined, a verification result of the homography matrix can be obtained according to the shape and size of a dashed line segment in a lane line in an image acquired by the camera, together with the preset shape information and preset size information of the dashed line segment.
According to this application scenario, online automatic calibration of the homography matrix from the camera to the road plane can be realized in an automatic driving application scenario, and the current calibration result can be updated in real time, so that the calibration adapts to changes in the automatic driving scene, perceives changes in the ground, and improves the accuracy in online use.
It can be understood that the above-mentioned method embodiments of the present disclosure can be combined with each other to form combined embodiments without departing from the principles and logic; for brevity, details are not repeated in the present disclosure. Those skilled in the art will appreciate that, in the above methods of the specific embodiments, the specific order of execution of the steps should be determined by their functions and possible inherent logic.
In addition, the present disclosure also provides a camera calibration apparatus, an electronic device, a computer-readable storage medium, and a program, all of which can be used to implement any one of the camera calibration methods provided by the present disclosure; for the corresponding technical solutions and technical effects, reference may be made to the corresponding descriptions in the method sections, which are not repeated here.
Fig. 7 shows a block diagram of a camera calibration apparatus provided in an embodiment of the present disclosure. As shown in fig. 7, the camera calibration apparatus includes:
a first acquisition module 71, configured to acquire internal parameters of a camera and the height of the camera relative to a target plane;
a first determining module 72, configured to determine three-dimensional coordinates of a plurality of spatial points in a camera coordinate system of the camera according to the image acquired by the camera;
a second determining module 73, configured to determine a normal vector of the target plane according to the three-dimensional coordinates of the plurality of spatial points in the camera coordinate system;
a third determining module 74, configured to determine an attitude angle of the camera relative to the target plane according to the normal vector of the target plane;
a fourth determining module 75, configured to determine a homography matrix from the camera to the target plane according to the internal parameters, the height, and the attitude angle.
In a possible implementation manner, the second determining module 73 is configured to:
extracting a plurality of candidate planes according to the three-dimensional coordinates of the plurality of spatial points in the camera coordinate system;
and determining the normal vector of the target plane according to the normal vectors of the candidate planes.
In a possible implementation manner, the second determining module 73 is configured to:
screening out potential planes of the target plane from the candidate planes according to the motion direction of the camera and normal vectors of the candidate planes;
and determining the normal vector of the target plane according to the normal vector of the potential plane.
In a possible implementation manner, the second determining module 73 is configured to:
for any of the plurality of candidate planes, determining the candidate plane as a potential plane of the target plane in response to a normal vector of the candidate plane being approximately perpendicular to a forward motion direction vector of the camera.
In a possible implementation manner, the second determining module 73 is configured to:
in the case that a plurality of potential planes exist, for any potential plane in the plurality of potential planes, determining the support number of the potential plane according to the normal vectors of the plurality of potential planes, wherein the support number of the potential plane represents the number of potential planes which are approximately parallel to the potential plane in the plurality of potential planes;
and determining the normal vector of the target plane according to the support numbers of the plurality of potential planes.
In a possible implementation manner, the second determining module 73 is configured to:
and determining the normal vector of the potential plane with the maximum support number in the plurality of potential planes as the normal vector of the target plane.
In one possible implementation manner, the first determining module 72 is configured to:
acquiring a first image and a second image acquired by the camera in a motion state;
determining a first rotation matrix and a translation vector of the camera motion from the first image and the second image;
and determining three-dimensional coordinates of a plurality of spatial points in a camera coordinate system of the camera according to the first rotation matrix and the translation vector.
In one possible implementation manner, the first determining module 72 is configured to:
extracting feature points of the first image and feature points of the second image;
determining an essential matrix or a fundamental matrix from the first image to the second image according to the matching relationship between the feature points of the first image and the feature points of the second image;
determining a first rotation matrix and a translation vector of the camera motion from the essential matrix or the fundamental matrix.
In a possible implementation manner, the fourth determining module 75 is configured to:
determining a second rotation matrix of the camera from a front view state to a top view state according to the attitude angle;
converting a third image acquired by the camera in the front view state into a fourth image, wherein the fourth image is an image looking down at the target plane;
acquiring image coordinates of a first image point in the third image and image coordinates of a second image point in the fourth image, wherein the first image point and the second image point are corresponding image points;
and determining a homography matrix of the camera to the target plane according to the image coordinates of the first image point, the image coordinates of the second image point, the second rotation matrix, the internal reference and the height.
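One concrete homography consistent with this description (mapping a pixel to metric (X, Y) coordinates on the target plane from the internal parameters K, the second rotation matrix R2 and the height h) can be sketched as follows. This is an illustrative derivation, not necessarily the disclosure's Equations 1 and 2, and the signs of the plane axes follow whatever convention R2 uses:

```python
import numpy as np

def ground_homography(K, R2, h):
    """H maps homogeneous pixel coordinates to road-plane coordinates:
    rotate the viewing ray into the top-view frame (R2 @ inv(K)), then
    scale so the ray meets the plane at depth h below the camera."""
    return np.diag([h, h, 1.0]) @ R2 @ np.linalg.inv(K)

def pixel_to_plane(H, u, v):
    """Apply H to a pixel and dehomogenize to plane coordinates."""
    X = H @ np.array([u, v, 1.0])
    return X[:2] / X[2]
```

For example, with a level camera 1.5 m above the road, a focal length of 500 pixels and principal point (320, 240), a ground point 1 m to the right and 10 m ahead projects to pixel (370, 315), and H maps that pixel back to a lateral offset of 1 m and a longitudinal distance of 10 m on the plane.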
In one possible implementation, the apparatus further includes:
the second acquisition module is used for acquiring the image coordinates of the target object from a fifth image acquired by the camera;
the conversion module is used for converting the image coordinate of the target object to the target plane according to the homography matrix to obtain the coordinate of the target object on the target plane;
a fifth determining module, configured to determine a verification result of the homography matrix according to preset attribute information of the target object and coordinates of the target object on the target plane, where the preset attribute information includes preset shape information and/or preset size information of the target object.
In one possible implementation manner, the fifth determining module is configured to:
determining shape information and size information of the target object according to the coordinates of the target object on the target plane;
in response to the shape information of the target object conforming to the preset shape information and the size information of the target object conforming to the preset size information, determining that the homography matrix is verified successfully; or, in response to the shape information of the target object not conforming to the preset shape information, or the size information of the target object not conforming to the preset size information, determining that the homography matrix verification fails.
In one possible implementation, the camera is mounted to a vehicle cabin, and the target plane is a road plane.
In the embodiments of the present disclosure, the internal parameters of the camera and the height of the camera relative to the target plane are acquired; the three-dimensional coordinates of a plurality of spatial points in the camera coordinate system of the camera are determined according to the image acquired by the camera; a normal vector of the target plane is determined according to the three-dimensional coordinates of the plurality of spatial points in the camera coordinate system; an attitude angle of the camera relative to the target plane is determined according to the normal vector of the target plane; and a homography matrix from the camera to the target plane is determined according to the internal parameters, the height, and the attitude angle. Therefore, only the internal parameters of the camera and the height of the camera relative to the target plane need to be acquired to calibrate the homography matrix from the camera to the target plane, so that the calibration result of the camera adapts better to changes in the scene and the accuracy in online use is improved. In addition, the camera calibration apparatus provided by the embodiments of the present disclosure removes the dependence on manual work in the calibration process: the calibration process is automated and requires no manual participation, so the probability of errors can be reduced. Moreover, the dependence on artificial reference objects is removed, so that no special calibration scene is required and the apparatus can be applied in a wider range of application scenarios.
In some embodiments, functions or modules included in the apparatus provided in the embodiments of the present disclosure may be used to execute the method described in the above method embodiments, and specific implementations and technical effects thereof may refer to the description of the above method embodiments, which are not described herein again for brevity.
Embodiments of the present disclosure also provide a computer-readable storage medium having stored thereon computer program instructions, which when executed by a processor, implement the above-described method. The computer-readable storage medium may be a non-volatile computer-readable storage medium, or may be a volatile computer-readable storage medium.
Embodiments of the present disclosure also provide a computer program, which includes computer readable code, and when the computer readable code runs in an electronic device, a processor in the electronic device executes the above method.
The disclosed embodiments also provide a computer program product, including computer readable code or a non-volatile computer readable storage medium carrying computer readable code; when the computer readable code runs in an electronic device, a processor in the electronic device performs the above method.
An embodiment of the present disclosure further provides an electronic device, including: one or more processors; a memory for storing executable instructions; wherein the one or more processors are configured to invoke the memory-stored executable instructions to perform the above-described method.
The electronic device may be provided as a terminal, server, or other form of device.
Fig. 8 illustrates a block diagram of an electronic device 800 provided by an embodiment of the disclosure. For example, the electronic device 800 may be a mobile phone, a computer, a digital broadcast terminal, a messaging device, a game console, a tablet device, a medical device, a fitness device, a personal digital assistant, or another such terminal.
Referring to fig. 8, the electronic device 800 may include one or more of the following components: a processing component 802, a memory 804, a power supply component 806, a multimedia component 808, an audio component 810, an input/output (I/O) interface 812, a sensor assembly 814, and a communication component 816.
The processing component 802 generally controls the overall operation of the electronic device 800, such as operations associated with display, telephone calls, data communications, camera operations, and recording operations. The processing component 802 may include one or more processors 820 to execute instructions to perform all or a portion of the steps of the methods described above. Further, the processing component 802 can include one or more modules that facilitate interaction between the processing component 802 and other components. For example, the processing component 802 can include a multimedia module to facilitate interaction between the multimedia component 808 and the processing component 802.
The memory 804 is configured to store various types of data to support operations at the electronic device 800. Examples of such data include instructions for any application or method operating on the electronic device 800, contact data, phonebook data, messages, pictures, videos, and so forth. The memory 804 may be implemented by any type of volatile or non-volatile memory device, or a combination thereof, such as static random access memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic disk, or optical disk.
The power supply component 806 provides power to the various components of the electronic device 800. The power supply component 806 may include a power management system, one or more power supplies, and other components associated with generating, managing, and distributing power for the electronic device 800.
The multimedia component 808 includes a screen that provides an output interface between the electronic device 800 and a user. In some embodiments, the screen may include a liquid crystal display (LCD) and a touch panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive input signals from the user. The touch panel includes one or more touch sensors to sense touches, slides, and gestures on the touch panel. The touch sensor may not only sense the boundary of a touch or slide action, but also detect the duration and pressure associated with the touch or slide operation. In some embodiments, the multimedia component 808 includes a front-facing camera and/or a rear-facing camera. The front-facing camera and/or the rear-facing camera may receive external multimedia data when the electronic device 800 is in an operation mode, such as a shooting mode or a video mode. Each front-facing camera and rear-facing camera may be a fixed optical lens system or have focal length and optical zoom capability.
The audio component 810 is configured to output and/or input audio signals. For example, the audio component 810 includes a microphone (MIC) configured to receive external audio signals when the electronic device 800 is in an operational mode, such as a call mode, a recording mode, or a voice recognition mode. The received audio signals may further be stored in the memory 804 or transmitted via the communication component 816. In some embodiments, the audio component 810 also includes a speaker for outputting audio signals.
The I/O interface 812 provides an interface between the processing component 802 and peripheral interface modules, which may be keyboards, click wheels, buttons, and the like. These buttons may include, but are not limited to: a home button, a volume button, a start button, and a lock button.
The sensor assembly 814 includes one or more sensors for providing state assessments of various aspects of the electronic device 800. For example, the sensor assembly 814 may detect the open/closed state of the electronic device 800 and the relative positioning of components, such as the display and keypad of the electronic device 800. The sensor assembly 814 may also detect a change in the position of the electronic device 800 or of a component of the electronic device 800, the presence or absence of user contact with the electronic device 800, the orientation or acceleration/deceleration of the electronic device 800, and a change in the temperature of the electronic device 800. The sensor assembly 814 may include a proximity sensor configured to detect the presence of a nearby object without any physical contact. The sensor assembly 814 may also include a light sensor, such as a complementary metal oxide semiconductor (CMOS) or charge coupled device (CCD) image sensor, for use in imaging applications. In some embodiments, the sensor assembly 814 may also include an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
The communication component 816 is configured to facilitate wired or wireless communication between the electronic device 800 and other devices. The electronic device 800 may access a wireless network based on a communication standard, such as a wireless network (Wi-Fi), the second generation mobile communication technology (2G), the third generation mobile communication technology (3G), the fourth generation mobile communication technology (4G), long term evolution (LTE), the fifth generation mobile communication technology (5G), or a combination thereof. In an exemplary embodiment, the communication component 816 receives a broadcast signal or broadcast related information from an external broadcast management system via a broadcast channel. In an exemplary embodiment, the communication component 816 further includes a near field communication (NFC) module to facilitate short-range communications. For example, the NFC module may be implemented based on radio frequency identification (RFID) technology, infrared data association (IrDA) technology, ultra wideband (UWB) technology, Bluetooth (BT) technology, and other technologies.
In an exemplary embodiment, the electronic device 800 may be implemented by one or more application specific integrated circuits (ASICs), digital signal processors (DSPs), digital signal processing devices (DSPDs), programmable logic devices (PLDs), field programmable gate arrays (FPGAs), controllers, microcontrollers, microprocessors, or other electronic components for performing the above-described methods.
In an exemplary embodiment, a non-transitory computer-readable storage medium, such as the memory 804, is also provided that includes computer program instructions executable by the processor 820 of the electronic device 800 to perform the above-described methods.
Fig. 9 shows a block diagram of an electronic device 1900 provided by an embodiment of the disclosure. For example, the electronic device 1900 may be provided as a server. Referring to fig. 9, the electronic device 1900 includes a processing component 1922, which further includes one or more processors, and memory resources, represented by a memory 1932, for storing instructions executable by the processing component 1922, such as application programs. The application programs stored in the memory 1932 may include one or more modules, each of which corresponds to a set of instructions. Further, the processing component 1922 is configured to execute the instructions to perform the above-described method.
The electronic device 1900 may also include a power component 1926 configured to perform power management of the electronic device 1900, a wired or wireless network interface 1950 configured to connect the electronic device 1900 to a network, and an input/output (I/O) interface 1958. The electronic device 1900 may operate based on an operating system stored in the memory 1932, such as Microsoft's server operating system (Windows Server™), Apple's graphical-user-interface-based operating system (Mac OS X™), the multi-user, multi-process computer operating system (Unix™), the free and open-source Unix-like operating system (Linux™), the open-source Unix-like operating system (FreeBSD™), or the like.
In an exemplary embodiment, a non-transitory computer readable storage medium, such as the memory 1932, is also provided that includes computer program instructions executable by the processing component 1922 of the electronic device 1900 to perform the above-described methods.
The present disclosure may be systems, methods, and/or computer program products. The computer program product may include a computer-readable storage medium having computer-readable program instructions embodied thereon for causing a processor to implement various aspects of the present disclosure.
The computer readable storage medium may be a tangible device that can hold and store the instructions for use by the instruction execution device. The computer readable storage medium may be, for example, but not limited to, an electronic memory device, a magnetic memory device, an optical memory device, an electromagnetic memory device, a semiconductor memory device, or any suitable combination of the foregoing. More specific examples (a non-exhaustive list) of the computer readable storage medium would include the following: a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), a Static Random Access Memory (SRAM), a portable compact disc read-only memory (CD-ROM), a Digital Versatile Disc (DVD), a memory stick, a floppy disk, a mechanical coding device, such as punch cards or in-groove projection structures having instructions stored thereon, and any suitable combination of the foregoing. Computer-readable storage media as used herein is not to be construed as transitory signals per se, such as radio waves or other freely propagating electromagnetic waves, electromagnetic waves propagating through a waveguide or other transmission medium (e.g., optical pulses through a fiber optic cable), or electrical signals transmitted through electrical wires.
The computer-readable program instructions described herein may be downloaded from a computer-readable storage medium to a respective computing/processing device, or to an external computer or external storage device via a network, such as the internet, a local area network, a wide area network, and/or a wireless network. The network may include copper transmission cables, fiber optic transmission, wireless transmission, routers, firewalls, switches, gateway computers and/or edge servers. The network adapter card or network interface in each computing/processing device receives computer-readable program instructions from the network and forwards the computer-readable program instructions for storage in a computer-readable storage medium in the respective computing/processing device.
The computer program instructions for carrying out operations of the present disclosure may be assembler instructions, instruction set architecture (ISA) instructions, machine-related instructions, microcode, firmware instructions, state setting data, or source or object code written in any combination of one or more programming languages, including an object oriented programming language such as Smalltalk, C++ or the like, and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The computer-readable program instructions may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider). In some embodiments, electronic circuitry, such as a programmable logic circuit, a field programmable gate array (FPGA), or a programmable logic array (PLA), can be personalized by utilizing the state information of the computer-readable program instructions, and this electronic circuitry can execute the computer-readable program instructions to implement aspects of the present disclosure.
Various aspects of the present disclosure are described herein with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the disclosure. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer-readable program instructions.
These computer-readable program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks. These computer-readable program instructions may also be stored in a computer-readable storage medium that can direct a computer, programmable data processing apparatus, and/or other devices to function in a particular manner, such that the computer-readable medium storing the instructions comprises an article of manufacture including instructions which implement the function/act specified in the flowchart and/or block diagram block or blocks.
The computer readable program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other devices to cause a series of operational steps to be performed on the computer, other programmable apparatus or other devices to produce a computer implemented process such that the instructions which execute on the computer, other programmable apparatus or other devices implement the functions/acts specified in the flowchart and/or block diagram block or blocks.
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of instructions, which comprises one or more executable instructions for implementing the specified logical function(s). In some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The computer program product may be embodied in hardware, software, or a combination thereof. In one alternative embodiment, the computer program product is embodied as a computer storage medium; in another alternative embodiment, it is embodied as a software product, such as a Software Development Kit (SDK).
Having described embodiments of the present disclosure, the foregoing description is intended to be exemplary; it is not exhaustive and is not limited to the disclosed embodiments. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the described embodiments. The terminology used herein was chosen to best explain the principles of the embodiments, the practical application, or technical improvements over technologies found in the marketplace, or to enable others of ordinary skill in the art to understand the embodiments disclosed herein.

Claims (15)

Translated from Chinese
1. A camera calibration method, comprising:
acquiring internal parameters of a camera and a height of the camera relative to a target plane;
determining three-dimensional coordinates of a plurality of spatial points in a camera coordinate system of the camera according to an image collected by the camera;
determining a normal vector of the target plane according to the three-dimensional coordinates of the plurality of spatial points in the camera coordinate system;
determining an attitude angle of the camera relative to the target plane according to the normal vector of the target plane; and
determining a homography matrix from the camera to the target plane according to the internal parameters, the height and the attitude angle.
2. The method according to claim 1, wherein determining the normal vector of the target plane according to the three-dimensional coordinates of the plurality of spatial points in the camera coordinate system comprises:
extracting a plurality of candidate planes according to the three-dimensional coordinates of the plurality of spatial points in the camera coordinate system; and
determining the normal vector of the target plane according to normal vectors of the plurality of candidate planes.
3. The method according to claim 2, wherein determining the normal vector of the target plane according to the normal vectors of the plurality of candidate planes comprises:
screening out potential planes of the target plane from the plurality of candidate planes according to a motion direction of the camera and the normal vectors of the plurality of candidate planes; and
determining the normal vector of the target plane according to normal vectors of the potential planes.
4. The method according to claim 3, wherein screening out the potential planes of the target plane from the plurality of candidate planes according to the motion direction of the camera and the normal vectors of the plurality of candidate planes comprises:
for any candidate plane among the plurality of candidate planes, in response to the normal vector of the candidate plane being substantially perpendicular to a forward motion direction vector of the camera, determining the candidate plane as a potential plane of the target plane.
5. The method according to claim 3 or 4, wherein determining the normal vector of the target plane according to the normal vectors of the potential planes comprises:
in a case where there are a plurality of potential planes, for any potential plane among the plurality of potential planes, determining a support number of the potential plane according to the normal vectors of the plurality of potential planes, wherein the support number of a potential plane represents the number of potential planes, among the plurality of potential planes, that are substantially parallel to the potential plane; and
determining the normal vector of the target plane according to the support numbers of the plurality of potential planes.
6. The method according to claim 5, wherein determining the normal vector of the target plane according to the support numbers of the plurality of potential planes comprises:
determining, as the normal vector of the target plane, the normal vector of the potential plane with the largest support number among the plurality of potential planes.
7. The method according to any one of claims 1 to 6, wherein determining the three-dimensional coordinates of the plurality of spatial points in the camera coordinate system of the camera according to the image collected by the camera comprises:
acquiring a first image and a second image collected by the camera while the camera is in motion;
determining a first rotation matrix and a translation vector of the camera motion according to the first image and the second image; and
determining the three-dimensional coordinates of the plurality of spatial points in the camera coordinate system of the camera according to the first rotation matrix and the translation vector.
8. The method according to claim 7, wherein determining the first rotation matrix and the translation vector of the camera motion according to the first image and the second image comprises:
extracting feature points of the first image and feature points of the second image;
determining an essential matrix or a fundamental matrix from the first image to the second image according to a matching relationship between the feature points of the first image and the feature points of the second image; and
determining the first rotation matrix and the translation vector of the camera motion according to the essential matrix or the fundamental matrix.
9. The method according to any one of claims 1 to 8, wherein determining the homography matrix from the camera to the target plane according to the internal parameters, the height and the attitude angle comprises:
determining, according to the attitude angle, a second rotation matrix by which the camera rotates from a forward-looking state to a top-down state;
converting a third image collected by the camera in the forward-looking state into a fourth image, wherein the fourth image is an image looking down on the target plane;
acquiring image coordinates of a first image point in the third image and image coordinates of a second image point in the fourth image, wherein the first image point and the second image point are corresponding image points; and
determining the homography matrix from the camera to the target plane according to the image coordinates of the first image point, the image coordinates of the second image point, the second rotation matrix, the internal parameters and the height.
10. The method according to any one of claims 1 to 9, wherein after determining the homography matrix from the camera to the target plane, the method further comprises:
obtaining image coordinates of a target object from a fifth image collected by the camera;
transforming the image coordinates of the target object to the target plane according to the homography matrix, to obtain coordinates of the target object on the target plane; and
determining a verification result of the homography matrix according to preset attribute information of the target object and the coordinates of the target object on the target plane, wherein the preset attribute information comprises preset shape information and/or preset size information of the target object.
11. The method according to claim 10, wherein determining the verification result of the homography matrix according to the preset attribute information of the target object and the coordinates of the target object on the target plane comprises:
determining shape information and size information of the target object according to the coordinates of the target object on the target plane; and
in response to the shape information of the target object matching the preset shape information and the size information of the target object matching the preset size information, determining that verification of the homography matrix succeeds; or, in response to the shape information of the target object not matching the preset shape information, or the size information of the target object not matching the preset size information, determining that verification of the homography matrix fails.
12. The method according to any one of claims 1 to 11, wherein the camera is mounted in a vehicle cabin, and the target plane is a road plane.
13. A camera calibration apparatus, comprising:
a first acquisition module configured to acquire internal parameters of a camera and a height of the camera relative to a target plane;
a first determination module configured to determine three-dimensional coordinates of a plurality of spatial points in a camera coordinate system of the camera according to an image collected by the camera;
a second determination module configured to determine a normal vector of the target plane according to the three-dimensional coordinates of the plurality of spatial points in the camera coordinate system;
a third determination module configured to determine an attitude angle of the camera relative to the target plane according to the normal vector of the target plane; and
a fourth determination module configured to determine a homography matrix from the camera to the target plane according to the internal parameters, the height and the attitude angle.
14. An electronic device, comprising:
one or more processors; and
a memory for storing executable instructions;
wherein the one or more processors are configured to invoke the executable instructions stored in the memory to perform the method according to any one of claims 1 to 12.
15. A computer-readable storage medium having computer program instructions stored thereon, wherein the computer program instructions, when executed by a processor, implement the method according to any one of claims 1 to 12.
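The geometry recited in claims 1 and 9 — an attitude angle recovered from the plane normal, a rotation from the forward-looking camera to a virtual top-down one, and a pixel-to-plane mapping built from the intrinsics, the height and the attitude — can be illustrated with a short numpy sketch. The axis convention (x right, y down, z forward), the plane equation n·X = h with the unit normal n pointing from the camera toward the plane, and all function names are illustrative assumptions, not the patent's notation:

```python
import numpy as np

def attitude_from_normal(n):
    """Pitch/roll of the camera relative to the plane, from the plane's
    unit normal n in camera coordinates (assumed convention: x right,
    y down, z forward; a level camera over a road has n = (0, 1, 0))."""
    n = n / np.linalg.norm(n)
    pitch = np.arcsin(np.clip(n[2], -1.0, 1.0))  # tilt of the optical axis toward the plane
    roll = np.arctan2(n[0], n[1])                # rotation about the optical axis
    return pitch, roll

def rotation_aligning(a, b):
    """Smallest rotation R with R @ a = b (Rodrigues formula). This plays
    the role of the 'second rotation matrix': it takes the forward-looking
    camera to a virtual top-down one. Undefined when a is opposite to b."""
    a = a / np.linalg.norm(a)
    b = b / np.linalg.norm(b)
    v, c = np.cross(a, b), np.dot(a, b)
    V = np.array([[0, -v[2], v[1]], [v[2], 0, -v[0]], [-v[1], v[0], 0]])
    return np.eye(3) + V + V @ V / (1.0 + c)

def image_to_plane(K, n, h, uv):
    """Back-project pixel uv onto the plane n.X = h (the plane at distance
    h from the camera centre), returning the 3-D point in camera coords."""
    ray = np.linalg.inv(K) @ np.array([uv[0], uv[1], 1.0])
    s = h / (n @ ray)  # scale at which the viewing ray meets the plane
    return s * ray

# Example: a level camera 1.5 m above the road; the pixel one focal
# length below the principal point lands 1.5 m ahead on the road plane.
K = np.array([[500.0, 0, 320], [0, 500.0, 240], [0, 0, 1]])
n = np.array([0.0, 1.0, 0.0])
X = image_to_plane(K, n, 1.5, (320, 740))  # gives (0, 1.5, 1.5)
```

Given the aligning rotation `R = rotation_aligning(n, np.array([0.0, 0.0, 1.0]))`, the rotation-induced homography `K @ R @ np.linalg.inv(K)` warps the forward-looking image toward the virtual top-down view, which is the kind of mapping claim 9 uses to relate the third and fourth images.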
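Claims 2 to 6 recite extracting candidate planes from the reconstructed points, keeping those roughly perpendicular to the camera's forward motion, and selecting the normal with the largest support. A hedged numpy sketch of that selection logic (the least-squares plane fit, the tolerance values and all names are assumptions for illustration, not the patent's method):

```python
import numpy as np

def fit_plane(points):
    """Least-squares plane through an N x 3 array of camera-frame points.
    Returns (unit normal, distance from the camera centre), with the
    normal oriented from the camera toward the plane."""
    centroid = points.mean(axis=0)
    # The plane normal is the singular vector with the smallest singular value.
    _, _, vt = np.linalg.svd(points - centroid)
    n = vt[-1]
    d = float(n @ centroid)
    if d < 0:
        n, d = -n, -d
    return n, d

def pick_ground_normal(normals, forward, perp_tol_deg=10.0, par_tol_deg=5.0):
    """Keep candidate normals roughly perpendicular to the camera's forward
    motion direction, then return the one with the largest 'support':
    the count of kept normals roughly parallel to it."""
    f = forward / np.linalg.norm(forward)
    potential = [n / np.linalg.norm(n) for n in normals
                 if np.degrees(np.arcsin(abs(np.dot(n, f)) / np.linalg.norm(n))) < perp_tol_deg]
    if not potential:
        return None
    cos_par = np.cos(np.radians(par_tol_deg))
    support = [sum(abs(np.dot(n, m)) >= cos_par for m in potential) for n in potential]
    return potential[int(np.argmax(support))]

# Two near-parallel ground candidates outvote a single wall candidate
# (a wall parallel to the driving direction also passes the
# perpendicularity filter, which is why the support count is needed).
ground_a = np.array([0.0, 1.0, 0.0])
ground_b = np.array([0.0, 1.0, 0.05])
wall = np.array([1.0, 0.0, 0.0])
best = pick_ground_normal([ground_a, ground_b, wall], forward=np.array([0.0, 0.0, 1.0]))
```

The perpendicularity filter mirrors claim 4 (normal roughly orthogonal to the forward motion vector), and the support count mirrors claims 5 and 6 (the normal parallel to the most other potential planes wins).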
CN202111011090.0A | priority 2021-08-31 | filed 2021-08-31 | Camera calibration method and device, electronic device and storage medium | Active | granted as CN113808216B (en)

Priority Applications (1)

Application Number | Priority Date | Filing Date | Title
CN202111011090.0A (granted as CN113808216B (en)) | 2021-08-31 | 2021-08-31 | Camera calibration method and device, electronic device and storage medium

Applications Claiming Priority (1)

Application Number | Priority Date | Filing Date | Title
CN202111011090.0A (granted as CN113808216B (en)) | 2021-08-31 | 2021-08-31 | Camera calibration method and device, electronic device and storage medium

Publications (2)

Publication Number | Publication Date
CN113808216A | 2021-12-17
CN113808216B (en) | 2024-12-10

Family

ID=78942122

Family Applications (1)

Application Number | Title | Priority Date | Filing Date
CN202111011090.0A (Active, granted as CN113808216B (en)) | Camera calibration method and device, electronic device and storage medium | 2021-08-31 | 2021-08-31

Country Status (1)

Country | Link
CN (1) | CN113808216B (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
CN114241061A (en) * | 2021-12-24 | 2022-03-25 | Shanghai Bochu Electronic Technology Co., Ltd. | Calibration method, calibration system and calibration target for line structured light imaging, and measurement system using the calibration target
CN116168087A (en) * | 2023-01-09 | 2023-05-26 | Zhidao Network Technology (Beijing) Co., Ltd. | Method, device, and electronic equipment for verifying roadside camera calibration results

Citations (8)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
DE102005007243A1 (en) * | 2005-02-17 | 2006-08-31 | Krackhardt, Ulrich, Dr. | Reflective surface three-dimensional coordinate measuring sensor, has lighting and observation units, where lighting unit producing sharpness depth, laterally structured light sample and observation unit has photo sensitive surface detector
CN102679945A (en) * | 2012-06-05 | 2012-09-19 | Harbin Institute of Technology | Satellite pointing and attitude measuring method and device based on three-point reflecting cooperation
CN107945234A (en) * | 2016-10-12 | 2018-04-20 | Hangzhou Hikvision Digital Technology Co., Ltd. | Method and device for determining external parameters of a stereo camera
CN109448105A (en) * | 2018-10-15 | 2019-03-08 | Shandong University | Three-dimensional human skeleton generation method and system based on multiple depth image sensors
CN109697734A (en) * | 2018-12-25 | 2019-04-30 | Zhejiang SenseTime Technology Development Co., Ltd. | Position and orientation estimation method and device, electronic equipment and storage medium
CN110310338A (en) * | 2019-06-24 | 2019-10-08 | Northwestern Polytechnical University | A light field camera calibration method based on a multicenter projection model
US20190371044A1 (en) * | 2018-06-04 | 2019-12-05 | Baidu Online Network Technology (Beijing) Co., Ltd | Method, apparatus, device and computer readable storage medium for reconstructing three-dimensional scene
CN112991465A (en) * | 2021-03-26 | 2021-06-18 | HoloMatic Technology (Beijing) Co., Ltd. | Camera calibration method and device, electronic equipment and computer readable medium


Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
CUI Yanping; LIN Yuchi; HUANG Yinguo: "Research on measuring the three-dimensional spatial attitude of rotary-body targets", Chinese Journal of Sensors and Actuators, no. 01, 28 February 2007 *
ZHANG Nan; SUN Jianfeng; JIANG Peng; LIU Di; WANG Penghui: "Normal vector estimation method for three-dimensional attitude points in lidar scenes", Infrared and Laser Engineering, no. 01, 25 January 2020 *
FANG Zhiqiang; XIAO Shuhao; XIONG Hegen; LI Gongfa: "Research on image matching methods for obtaining target attitude", Manufacturing Automation, no. 04, 25 April 2019 *
YOU Jiang; TANG Liwei; DENG Shijie: "Research on automatic measurement of spatial target attitude with machine vision", China Measurement & Test, no. 11, 30 November 2016 *


Also Published As

Publication number | Publication date
CN113808216B (en) | 2024-12-10

Similar Documents

Publication | Title
US11270460B2 | Method and apparatus for determining pose of image capturing device, and storage medium
CN111983635B (en) | Pose determination method and device, electronic device and storage medium
EP3825960B1 (en) | Method and device for obtaining localization information
CN109584362B (en) | Three-dimensional model construction method and device, electronic equipment and storage medium
EP4026092B1 (en) | Scene lock mode for capturing camera images
CN114019473A (en) | Object detection method and device, electronic device and storage medium
CN110473259A (en) | Pose determination method and device, electronic equipment and storage medium
CN112945207B (en) | Target positioning method and device, electronic equipment and storage medium
CN110503689A (en) | Attitude prediction method, model training method and device
CN111401230B (en) | Pose estimation method and device, electronic equipment and storage medium
WO2023103377A1 | Calibration method and apparatus, electronic device, storage medium, and computer program product
CN113052919A (en) | Calibration method and device of visual sensor, electronic equipment and storage medium
CN114519794A (en) | Feature point matching method and device, electronic equipment and storage medium
CN112148815A (en) | Positioning method and device based on shared map, electronic equipment and storage medium
CN112184787A (en) | Image registration method and device, electronic equipment and storage medium
CN113808216B (en) | Camera calibration method and device, electronic device and storage medium
CN114581525A (en) | Attitude determination method and device, electronic device and storage medium
CN111860373B (en) | Target detection method and device, electronic equipment and storage medium
CN111325786B (en) | Image processing method and device, electronic equipment and storage medium
CN112837372A (en) | Data generation method and device, electronic equipment and storage medium
CN113192145B (en) | Equipment calibration method and device, electronic equipment and storage medium
CN112767541B (en) | Three-dimensional reconstruction method and device, electronic equipment and storage medium
CN109543544B (en) | Cross-spectrum image matching method and device, electronic equipment and storage medium
CN114898074B (en) | Three-dimensional information determination method, device, electronic device and storage medium
HK40045348A | Target positioning method and device, electronic equipment and storage medium

Legal Events

Code | Title
PB01 | Publication
SE01 | Entry into force of request for substantive examination
GR01 | Patent grant
