CN114754743A - Target positioning method for carrying multiple PTZ cameras on intelligent ground unmanned platform - Google Patents

Target positioning method for carrying multiple PTZ cameras on intelligent ground unmanned platform

Info

Publication number
CN114754743A
CN114754743A
Authority
CN
China
Prior art keywords
coordinate system
target
camera
vehicle body
ptz
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202210403706.7A
Other languages
Chinese (zh)
Other versions
CN114754743B (en)
Inventor
徐友春
冒康
娄静涛
朱愿
李永乐
袁振寰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Military Transportation Research Institute Of Chinese People's Liberation Army Army Military Transportation Academy
Original Assignee
Military Transportation Research Institute Of Chinese People's Liberation Army Army Military Transportation Academy
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Military Transportation Research Institute Of Chinese People's Liberation Army Army Military Transportation Academy
Priority to CN202210403706.7A
Publication of CN114754743A
Application granted
Publication of CN114754743B
Legal status: Active
Anticipated expiration

Abstract

The invention relates to a target positioning method for a ground intelligent unmanned platform carrying multiple PTZ cameras. The PTZ cameras and a vehicle-mounted computing module installed on the platform form a sensing system: the cameras send their acquired images, together with their own pose and focal-length information, to the computing module, while the computing module can send instructions back to the cameras to control their rotation and zoom. The method computes the internal parameter matrix K_i of each PTZ camera and its rotation matrix R_i and translation vector T_i relative to the vehicle body coordinate system, computes the transformation matrix H_i that maps the target to be positioned from the world coordinate system to each camera's pixel coordinate system, and solves for the three-dimensional position of the target in the vehicle body coordinate system by the least square method. Advantageous effects: the ground intelligent unmanned platform can meet the requirements of reconnaissance and accurate positioning of targets over different angular ranges and distances in a variety of environments, and can perform accurate three-dimensional positioning of the target.

Description

Target positioning method for carrying multiple PTZ cameras on intelligent ground unmanned platform
Technical Field
The invention belongs to the technical field of vehicle-mounted sensing systems, and particularly relates to a target positioning method for carrying a plurality of PTZ cameras on a ground intelligent unmanned platform.
Background
With the rapid development of information technology, the capability of intelligent unmanned systems to carry out specific tasks has leapt forward, and their development is of great significance in both civil and military fields. On the civil side, intelligent unmanned systems can reduce human workload, outperform people in some fields, and improve production efficiency. On the military side, a ground intelligent unmanned platform can assist or even replace soldiers in tasks such as battlefield maneuver patrol, reconnaissance of unfamiliar scenes, logistics supply, and fire assault, thereby lightening the burden on combatants and reducing casualties.
An intelligent unmanned system mainly comprises four modules: environment perception, positioning and navigation, planning and decision-making, and intelligent control. The perception module is the system's main source of input information, and its output is the basis for planning and decision-making. Tracking and positioning of targets is one of the perception module's main functions, and image-based target tracking and positioning is a research hotspot in the field of computer vision.
With the development of sensing technology, target tracking and positioning can be realized by lidar, cameras, millimeter-wave radar, and other means; for ranging, these sensors have the following characteristics. (1) Lidar is an active ranging sensor that acquires centimeter-level three-dimensional information for each laser point and, after processing, can position a detected and tracked target with centimeter-level accuracy. It is little affected by illumination and works day and night, but it is expensive, its scan points are limited, and its information density is insufficient; in particular, as the target distance grows, the point cloud becomes sparser, the resolution drops, texture features are severely lost, and the perception effect degrades sharply. (2) Monocular cameras are inexpensive, provide rich texture information, and are the main means of current target identification. Limited by the pinhole imaging principle, however, they cannot directly obtain target depth; in practice the depth is usually estimated by deep learning from prior information about the target, but this method has large errors. Moreover, a camera is a passive sensor and is easily affected by ambient conditions such as illumination. (3) An ordinary binocular camera realizes stereoscopic vision through stereo matching, but its positioning accuracy is limited by the baseline length, and high-accuracy positioning and ranging are obtained only within a small range; since its pose cannot be changed and its lens focal length cannot be adjusted, its sensing range cannot be changed and its high-precision ranging range remains very small. (4) Millimeter-wave radar is an active sensor with strong capability of detecting the azimuth, distance, and speed of objects within about 200 meters; it penetrates smoke, dust, rain, and snow, and adapts to bad weather without being affected by illumination changes. However, it suffers from many noise points and low angular resolution and ranging accuracy, and cannot meet the high-accuracy target positioning an intelligent unmanned system requires. In summary, no single sensor currently used by intelligent unmanned systems for target detection, tracking, and positioning simultaneously offers low cost, high precision, and a large range.
The PTZ camera, also called a pan-tilt-zoom camera, can rotate in all directions and zoom its lens. By rotating, it can acquire image information of different areas; by zooming, it can acquire image information at different depths without reducing image resolution, so it can maintain high-precision resolution over a large range, matching the needs of an intelligent unmanned system. A pair of PTZ cameras imitates the chameleon's visual system: the left and right "eyes" rotate independently, so when the surroundings must be sensed, the two cameras rotate separately for wide-angle reconnaissance, and when a target must be positioned, they point in the same direction to form stereoscopic vision and realize three-dimensional positioning. Thanks to these characteristics, two PTZ cameras can form stereoscopic vision at any angle and focal length, enabling high-precision three-dimensional positioning of distant targets and high-quality three-dimensional reconstruction of scenes. Compared with lidar, PTZ cameras are inexpensive, and by adjusting focal length and pose they achieve wide-range, high-precision target detection, tracking, and positioning. A PTZ camera can therefore be applied on a ground intelligent unmanned platform to replace expensive electro-optical equipment for tracking long-distance military targets, sensing the battlefield environment, and three-dimensionally positioning important targets. Combined with visual SLAM and three-dimensional reconstruction, it can further realize positioning and mapping of a whole battlefield area, which has important military value.
Disclosure of Invention
The invention aims to overcome the defects of the traditional sensing technology and provide a target positioning method for carrying a plurality of PTZ cameras on a ground intelligent unmanned platform.
In order to achieve this purpose, the invention adopts the following technical scheme. In a target positioning method for a ground intelligent unmanned platform carrying multiple PTZ cameras, the PTZ cameras and a vehicle-mounted computing module installed on the platform form a sensing system: the cameras send their acquired images, together with their own pose and focal-length information, to the computing module, while the computing module can send instructions to the cameras to control their rotation and zoom. The method computes the internal parameter matrix K_i of each PTZ camera and its rotation matrix R_i and translation vector T_i relative to the vehicle body coordinate system, computes the transformation matrix H_i that maps the target to be positioned from the world coordinate system to each camera's pixel coordinate system, and solves for the three-dimensional position of the target in the vehicle body coordinate system by the least square method. The specific steps are as follows:
Step one, calibrate the position of each PTZ camera in the vehicle body coordinate system by measurement, i.e. the translation vector of the camera relative to the vehicle body coordinate system, T_i = (a, b, c)^T.
Step two, compute the rotation matrix R_i of each PTZ camera's coordinate system O_c-xyz relative to the vehicle body coordinate system O_v-xyz.
Step three, estimate the internal parameter matrix K_i of each camera at the current zoom value.
Step four, compute the transformation matrix H_i converting the positioned target's coordinates in the vehicle body coordinate system O_v-xyz into coordinates in each camera's pixel coordinate system o_i-uv.
Step five, solve for the target's three-dimensional position P(x_v, y_v, z_v) in the vehicle body coordinate system O_v-xyz by the least square method.
Further, the rotation matrix R_i of each camera coordinate system relative to the vehicle body coordinate system in step two is calculated as follows:
1) set the direction of the camera coordinate system at its initial position to coincide with the direction of the vehicle body coordinate system;
2) acquire the horizontal rotation angle α and the pitch rotation angle β of each PTZ camera relative to its initial position;
3) let v be the direction vector, in the vehicle body coordinate system, of the camera's horizontal rotation axis at the initial position, and h the direction vector of its pitch rotation axis; the rotation matrix is obtained as

    R_i = R(v, α) · R(h, β)    (1)

where R(v, α) denotes the rotation matrix for a rotation by the angle α about the vector v, and R(h, β), with the same meaning, the rotation by β about h. By the Rodrigues formula,

    R(v, α) = cos α · I + (1 − cos α) · v v^T + sin α · [v]_×    (2)

and R(h, β) can be calculated by the same method. Here v and h can be idealized to coincide with the corresponding coordinate axes, I is the third-order identity matrix, and [v]_× denotes the skew-symmetric cross-product matrix of v.
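The Rodrigues construction above can be sketched in Python with NumPy. The axis choices below (z-axis for pan, y-axis for tilt) and the composition order are illustrative assumptions rather than details fixed by the patent text:

```python
import numpy as np

def rodrigues(axis, angle):
    """Rotation matrix for a rotation of `angle` radians about `axis`, via the
    Rodrigues formula: R = cos(a)*I + (1-cos(a))*v v^T + sin(a)*[v]_x."""
    v = np.asarray(axis, dtype=float)
    v = v / np.linalg.norm(v)
    # Skew-symmetric cross-product matrix [v]_x
    K = np.array([[0.0, -v[2], v[1]],
                  [v[2], 0.0, -v[0]],
                  [-v[1], v[0], 0.0]])
    return (np.cos(angle) * np.eye(3)
            + (1.0 - np.cos(angle)) * np.outer(v, v)
            + np.sin(angle) * K)

def camera_rotation(alpha, beta, v=(0.0, 0.0, 1.0), h=(0.0, 1.0, 0.0)):
    """R_i for a camera panned by alpha about v, then tilted by beta about h.
    The axis vectors and composition order here are idealized assumptions."""
    return rodrigues(v, alpha) @ rodrigues(h, beta)
```

Since the result is a proper rotation, `R @ R.T` equal to the identity (and determinant 1) is a convenient sanity check on any computed R_i.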
Further, the internal parameter matrix K_i of each camera in step three is estimated by establishing a functional relation K_i = f(z) between the zoom value z and the internal parameter matrix from the results of a preliminary discrete calibration, and then estimating K_i from the current zoom value.
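A minimal sketch of this zoom-to-intrinsics mapping, assuming a hypothetical discrete calibration table and linear interpolation between calibrated points (the zoom and focal-length sample values below are invented for illustration, as is the fixed-principal-point assumption):

```python
import numpy as np

# Hypothetical offline calibration results: zoom value -> focal length (pixels).
ZOOM_SAMPLES = np.array([1.0, 2.0, 5.0, 10.0, 20.0])
FOCAL_SAMPLES = np.array([1200.0, 2400.0, 6050.0, 12100.0, 24300.0])

def intrinsics_for_zoom(z, cx=960.0, cy=540.0):
    """Estimate K_i = f(z) by interpolating the discrete calibration table.
    The principal point (cx, cy) is assumed constant over zoom, which is
    an approximation."""
    f = np.interp(z, ZOOM_SAMPLES, FOCAL_SAMPLES)
    return np.array([[f, 0.0, cx],
                     [0.0, f, cy],
                     [0.0, 0.0, 1.0]])
```

In practice one would fit f(z) per camera from its own calibration data; linear interpolation is just one simple choice of model.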
Further, the transformation matrix H_i in step four is calculated as

    H_i = K_i [ R_i^T | −R_i^T · T_i ]    (3)

a 3×4 matrix that maps homogeneous vehicle-body coordinates to homogeneous pixel coordinates.
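Under the convention of steps one and two (R_i is the camera's orientation and T_i its position, both in the body frame), the 3×4 matrix H_i of step four can be sketched as follows; the original publication gives the formula only as an image, so this form is reconstructed from those stated conventions:

```python
import numpy as np

def projection_matrix(K, R, T):
    """Sketch of H_i = K_i [R_i^T | -R_i^T T_i]: a body-frame point P is first
    expressed in the camera frame as R^T (P - T), then projected through K."""
    Rt = np.asarray(R, dtype=float).T
    t = -Rt @ np.asarray(T, dtype=float).reshape(3, 1)
    return np.asarray(K, dtype=float) @ np.hstack([Rt, t])
```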
furthermore, step five, the target on-vehicle body coordinate system O is obtained by using a least square methodv-three-dimensional position P (x) at xyzv,yv,zv) The method comprises the following steps:
(1) acquiring coordinates p of a target located in the pixel coordinate system of each camera imagei(ui,vi) Coordinates P (x) of the object to be positioned in the coordinates of the vehicle bodyv,yv,zv) And pi(ui,vi) The conversion relationship between the two is as follows:
(ui,vi,1)T=Hi(xv,yv,zv,1)T (4)
(2) expanding the transformation relation of each group of the formula (4) to obtain 2i groups of equations, and writing the simultaneous equations into a matrix form to obtain:
A(2i)×3(xw,yw,zw)T=B(2i)×1 (5)
solving the formula (5) by using a least square method to obtain P (x)v,yv,zv) Three-dimensional coordinates of (a).
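The stack-and-solve of equations (4) and (5) can be sketched as follows; `triangulate` is a hypothetical helper name, and the exact row construction is one standard way of eliminating the projective scale:

```python
import numpy as np

def triangulate(pixels, Hs):
    """Least-squares solve for P = (x_v, y_v, z_v) from pixel observations
    (u_i, v_i) and 3x4 matrices H_i.  From s*(u,v,1)^T = H*(x,y,z,1)^T the
    scale s is eliminated, giving two linear equations per camera; the
    stacked system A*P = B is then solved by least squares."""
    A, B = [], []
    for (u, v), H in zip(pixels, Hs):
        # u*(H row 3) - (H row 1) = 0 and v*(H row 3) - (H row 2) = 0,
        # split into a coefficient part (first 3 columns) and constant part.
        A.append(u * H[2, :3] - H[0, :3]); B.append(H[0, 3] - u * H[2, 3])
        A.append(v * H[2, :3] - H[1, :3]); B.append(H[1, 3] - v * H[2, 3])
    P, *_ = np.linalg.lstsq(np.asarray(A), np.asarray(B), rcond=None)
    return P
```

With i cameras the stacked A has 2i rows, matching the A_(2i)×3 system of equation (5); any i ≥ 2 with distinct viewpoints makes the system solvable.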
Further, each PTZ camera has two rotational degrees of freedom in pan and tilt and one zoom degree of freedom.
Furthermore, each PTZ camera can independently rotate and zoom to sense the environment around the vehicle body, and the cameras can also form stereoscopic vision systems in pairs, or a multi-view stereoscopic vision system together, to realize three-dimensional positioning of the target.
Advantageous effects: compared with the prior art, the invention addresses the need of intelligent unmanned systems for reconnaissance and accurate positioning of targets at different focal lengths and distances in various environments, and overcomes the shortcomings of traditional intelligent-unmanned-system sensors in positioning distance, accuracy, and texture information. By carrying multiple PTZ cameras, it performs accurate three-dimensional positioning of the target from the poses of the cameras combined with the target's position in each camera's image.
Drawings
Fig. 1 is a plan view of the platform with four pose-adjustable zoom cameras mounted.
In the figure, T_i denotes the position of each camera in the vehicle body coordinate system.
Detailed Description
So that the manner in which the above recited objects, features and advantages of the present invention can be understood in detail, a more particular description of the invention, briefly summarized above, may be had by reference to the embodiments illustrated in the appended drawings. The embodiments and features of the embodiments of the present application may be combined with each other without conflict. In the following description, numerous specific details are set forth to provide a thorough understanding of the present invention; the described embodiments are merely a subset of the embodiments of the present invention, rather than all of them. All other embodiments obtained by a person skilled in the art without any inventive step based on the embodiments of the present invention fall within the scope of the present invention. Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs. The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the invention.
In various embodiments of the present invention, for convenience in description and not in limitation, the term "coupled" as used in the specification and claims of the present application is not limited to physical or mechanical connections, but may include electrical connections, whether direct or indirect. "upper", "lower", "left", "right", and the like are used merely to indicate relative positional relationships, and when the absolute position of the object being described is changed, the relative positional relationships are changed accordingly.
Referring to the attached drawings, this embodiment provides a target positioning method for a ground intelligent unmanned platform carrying multiple PTZ cameras. The PTZ cameras and a vehicle-mounted computing module installed on the platform form a sensing system: the cameras send their acquired images, together with their own pose and focal-length information, to the computing module, while the computing module can send instructions to the cameras to control their rotation and zoom. The method computes the internal parameter matrix K_i of each PTZ camera and its rotation matrix R_i and translation vector T_i relative to the vehicle body coordinate system, computes the transformation matrix H_i that maps the target to be positioned from the world coordinate system to each camera's pixel coordinate system, and solves for the three-dimensional position of the target in the vehicle body coordinate system by the least square method. The specific steps are as follows:
Step one, calibrate the position of each PTZ camera in the vehicle body coordinate system by measurement, i.e. measure the translation vector of each camera relative to the origin of the vehicle body coordinate system, T_i = (a, b, c)^T.
Step two, compute the rotation matrix R_i of each PTZ camera's coordinate system O_c-xyz relative to the vehicle body coordinate system O_v-xyz.
Step three, estimate the internal parameter matrix K_i of each camera at the current zoom value.
Step four, compute the transformation matrix H_i converting the positioned target's coordinates in the vehicle body coordinate system O_v-xyz into coordinates in each camera's pixel coordinate system o_i-uv.
Step five, solve for the target's three-dimensional position P(x_v, y_v, z_v) in the vehicle body coordinate system O_v-xyz by the least square method.
In a preferred embodiment of the present invention, the rotation matrix R_i of each camera coordinate system relative to the vehicle body coordinate system in step two is calculated as follows:
1) the direction of the camera coordinate system O_c-xyz at its initial position is set to coincide with the directions of the axes of the vehicle body coordinate system O_v-xyz;
2) the horizontal rotation angle α and the pitch rotation angle β of each PTZ camera relative to its initial position are acquired through the camera's built-in SDK;
3) with v the direction vector, in the vehicle body coordinate system, of the camera's horizontal rotation axis at the initial position, and h the direction vector of its pitch rotation axis, the rotation matrix can be obtained:

    R_i = R(v, α) · R(h, β)    (1)

where R(v, α) denotes the rotation matrix for a rotation by the angle α about the vector v, and R(h, β), with the same meaning, the rotation by β about h. By the Rodrigues formula,

    R(v, α) = cos α · I + (1 − cos α) · v v^T + sin α · [v]_×    (2)

and R(h, β) can be calculated by the same method. Here v and h can be idealized to coincide with the corresponding coordinate axes, I is the third-order identity matrix, and [v]_× denotes the skew-symmetric cross-product matrix of v.
In a preferred scheme of this embodiment, the internal parameter matrix K_i of each camera in step three is estimated by establishing a functional relation K_i = f(z) between the zoom value z and the internal parameter matrix from the results of a preliminary discrete calibration, and then estimating K_i from the current zoom value; z is available from the camera SDK.
In a preferred scheme of this embodiment, the transformation matrix H_i in step four is calculated as

    H_i = K_i [ R_i^T | −R_i^T · T_i ]    (3)
preferably, in the present embodiment, the target on-vehicle body coordinate system O is obtained by using the least square method in the step fivev-three-dimensional position P (x) at xyzv,yv,zv) The method comprises the following steps:
(1) acquiring the coordinates p of the positioned target in the pixel coordinate system of each camera imagei(ui,vi) Coordinates P (x) of the object to be positioned in the coordinates of the vehicle bodyv,yv,zv) And pi(ui,vi) The conversion relationship between the two is as follows:
(ui,vi,1)T=Hi(xv,yv,zv,1)T (4)
(2) and (3) expanding the transformation relation of each group of the formula (4) to obtain 2i groups of formula:
Figure BDA0003601384850000072
and (3) combining the simultaneous equations and writing the simultaneous equations into a matrix form:
obtaining:
Figure BDA0003601384850000073
namely:
A(2i)×3(xw,yw,zw)T=B(2i)×1 (7)
solving the formula (7) by using a least square method to obtain P (x)v,yv,zv) Three-dimensional coordinates of (a).
The preferred solution of this embodiment is that each PTZ camera has two rotational degrees of freedom in pan and tilt and one zoom degree of freedom.
In a preferred scheme of this embodiment, each PTZ camera can independently rotate and zoom to sense the environment around the vehicle body, and the cameras can also form stereoscopic vision systems in pairs, or a multi-view stereoscopic vision system together, to realize three-dimensional positioning of the target.
Examples
To verify the target positioning method, multiple PTZ cameras were installed on a ground intelligent unmanned platform and positioning experiments were carried out with the two front cameras: targets at distances of 0-100 m, at 5 m intervals, were positioned under different camera focal lengths and angles, and the positioned values were compared with the true values. The positioning results are shown in Table 1.
TABLE 1 Long-distance target location experimental data
(The data of Table 1 are given as images in the original publication and are not reproduced here.)
Within the 100 m range, the positioning distance error for targets at the various distances is less than 2%.
Targets were also positioned at different angles within a 10 m range; the Euclidean-distance error between the experimentally obtained three-dimensional position and the true position is less than 2.8%.
TABLE 2 results of wide-angle target positioning experiments
(The data of Table 2 are given as images in the original publication and are not reproduced here.)
The above detailed description of the target positioning method for multiple PTZ cameras mounted on an intelligent ground unmanned platform, given with reference to the embodiments, is illustrative rather than restrictive; several embodiments may be enumerated within the limited scope, so changes and modifications that do not depart from the general concept of the present invention shall fall within its protection scope.

Claims (7)

1. A target positioning method for carrying a plurality of PTZ cameras on a ground intelligent unmanned platform, characterized in that a sensing system is formed by the PTZ cameras and a vehicle-mounted computing module installed on the ground intelligent unmanned platform; the sensing system sends the images acquired by the PTZ cameras, together with their own pose and focal-length information, to the vehicle-mounted computing module, while the vehicle-mounted computing module can send instructions to the cameras to control their rotation and zoom; an internal parameter matrix K_i of each PTZ camera and its rotation matrix R_i and translation vector T_i relative to the vehicle body coordinate system are calculated; a transformation matrix H_i transforming the target to be positioned from the world coordinate system to the pixel coordinate system of each camera is calculated; and the three-dimensional position of the target in the vehicle body coordinate system is solved by the least square method; the method comprises the following specific steps:
step one, calibrating the position of each PTZ camera in a vehicle body coordinate system through measurement, namely, a translation vector T of the camera relative to the vehicle body coordinate systemi=(a,b,c)T
Step two, calculating a camera coordinate system O of each PTZ cameracXyz relative to the vehicle body coordinate system Ov-a rotation matrix R of xyzi
Step three, estimating the internal reference matrix K of each PTZ camera under the current zoom multiplei
Step four, calculating a vehicle body coordinate system O of the positioned targetv-conversion of coordinates in xyz to the respective camera pixel coordinate system oi-transformation matrix H of coordinates under uvi
Step five, a target on-vehicle body coordinate system O is solved by using a least square methodv-three-dimensional position P (x) at xyzv,yv,zv)。
2. The method for positioning a target with a plurality of PTZ cameras on an intelligent ground unmanned platform as claimed in claim 1, characterized in that the rotation matrix R_i of each camera coordinate system relative to the vehicle body coordinate system in step two is calculated as follows:
1) setting the direction of the camera coordinate system at its initial position to coincide with the direction of the vehicle body coordinate system;
2) acquiring the horizontal rotation angle α and the pitch rotation angle β of each PTZ camera relative to its initial position;
3) with v the direction vector, in the vehicle body coordinate system, of the camera's horizontal rotation axis at the initial position, and h the direction vector of its pitch rotation axis, obtaining the rotation matrix

    R_i = R(v, α) · R(h, β)    (1)

where R(v, α) denotes the rotation matrix for a rotation by the angle α about the vector v, and R(h, β), with the same meaning, the rotation by β about h; by the Rodrigues formula,

    R(v, α) = cos α · I + (1 − cos α) · v v^T + sin α · [v]_×    (2)

and R(h, β) can be calculated by the same method, where v and h can be idealized to coincide with the corresponding coordinate axes, I is the third-order identity matrix, and [v]_× denotes the skew-symmetric cross-product matrix of v.
3. The method for positioning a target with a plurality of PTZ cameras on an intelligent ground unmanned platform as claimed in claim 1, characterized in that the internal parameter matrix K_i of each camera in step three is estimated by establishing a functional relation K_i = f(z) between the zoom value z and the internal parameter matrix from the results of a preliminary discrete calibration, and then estimating K_i from the current zoom value.
4. The method for positioning a target with a plurality of PTZ cameras on an intelligent ground unmanned platform as claimed in claim 1, characterized in that the transformation matrix H_i in step four is calculated as

    H_i = K_i [ R_i^T | −R_i^T · T_i ]    (3)
5. The method for positioning a target with a plurality of PTZ cameras on an intelligent ground unmanned platform as claimed in claim 1, characterized in that solving for the target's three-dimensional position P(x_v, y_v, z_v) in the vehicle body coordinate system O_v-xyz by the least square method in step five proceeds as follows:
(1) acquiring the coordinates p_i(u_i, v_i) of the positioned target in each camera's image pixel coordinate system, the conversion relation between the target's vehicle-body coordinates P(x_v, y_v, z_v) and p_i(u_i, v_i) being, up to the homogeneous scale factor s_i,

    s_i · (u_i, v_i, 1)^T = H_i · (x_v, y_v, z_v, 1)^T    (4)

(2) expanding relation (4) for each camera and eliminating s_i to obtain 2i equations, and writing the simultaneous equations in matrix form to obtain

    A_{2i×3} · (x_v, y_v, z_v)^T = B_{2i×1}    (5)

solving equation (5) by the least square method yields the three-dimensional coordinates P(x_v, y_v, z_v).
6. The method for positioning the target with a plurality of PTZ cameras on the intelligent ground unmanned platform as claimed in claim 1, wherein the method comprises the following steps: each of the PTZ cameras has two rotational degrees of freedom in pan and tilt and one zoom degree of freedom.
7. The method for positioning a target with a plurality of PTZ cameras on an intelligent ground unmanned platform as claimed in claim 1 or 6, characterized in that each PTZ camera can independently rotate and zoom to sense the environment around the vehicle body, and the cameras can also form stereoscopic vision systems in pairs, or a multi-view stereoscopic vision system together, to realize three-dimensional positioning of the target.
CN202210403706.7A, filed 2022-04-18 (priority 2022-04-18): Target positioning method for carrying multiple PTZ cameras on ground intelligent unmanned platform. Active. Granted as CN114754743B (en).

Priority Applications (1)

Application Number: CN202210403706.7A; Priority Date: 2022-04-18; Filing Date: 2022-04-18; Title: Target positioning method for carrying multiple PTZ cameras on ground intelligent unmanned platform (CN114754743B).

Applications Claiming Priority (1)

Application Number: CN202210403706.7A; Priority Date: 2022-04-18; Filing Date: 2022-04-18; Title: Target positioning method for carrying multiple PTZ cameras on ground intelligent unmanned platform (CN114754743B).

Publications (2)

Publication Number | Publication Date
CN114754743A | 2022-07-15
CN114754743B (en) | 2024-09-10

Family

ID=82330477

Family Applications (1)

Application Number: CN202210403706.7A; Priority Date: 2022-04-18; Filing Date: 2022-04-18; Title: Target positioning method for carrying multiple PTZ cameras on ground intelligent unmanned platform; Status: Active; Granted as CN114754743B (en).

Country Status (1)

Country: CN; Link: CN114754743B (en).

Citations (7)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
US8049658B1 (en) * | 2007-05-25 | 2011-11-01 | Lockheed Martin Corporation | Determination of the three-dimensional location of a target viewed by a camera
WO2012158017A1 (en) * | 2011-05-13 | 2012-11-22 | Mimos, Berhad | Method and system for multiple objects tracking and display
CN110415278A (en) * | 2019-07-30 | 2019-11-05 | 中国人民解放军火箭军工程大学 | Master-slave tracking method of linear moving PTZ camera assisted binocular PTZ vision system
DE102018008979A1 (en) * | 2018-11-14 | 2020-05-14 | VST Vertriebsgesellschaft für Video-System- und Kommunikationstechnik mbh | Autonomous camera tracking and image blending device
CN111461994A (en) * | 2020-03-30 | 2020-07-28 | 苏州科达科技股份有限公司 | Method for obtaining coordinate transformation matrix and positioning target in monitoring picture
CN113379848A (en) * | 2021-06-09 | 2021-09-10 | 中国人民解放军陆军军事交通学院军事交通运输研究所 | Target positioning method based on binocular PTZ camera
CN113487677A (en) * | 2021-06-07 | 2021-10-08 | 电子科技大学长三角研究院(衢州) | Outdoor medium and long distance scene calibration method of multiple PTZ cameras based on any distributed configuration


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
CUI Zhigao; DENG Lei; LI Aihua; JIANG Ke; ZHOU Jie: "Binocular PTZ master-slave tracking method using ground plane constraint", Infrared and Laser Engineering, no. 08, 25 August 2013 (2013-08-25)*

Also Published As

Publication numberPublication date
CN114754743B (en)2024-09-10

Similar Documents

PublicationPublication DateTitle
CN110243283B (en)Visual measurement system and method with variable visual axis
CN104075688B (en)A kind of binocular solid stares the distance-finding method of monitoring system
CN110842940A (en)Building surveying robot multi-sensor fusion three-dimensional modeling method and system
CN102779347B (en)Method and device for tracking and locating target for aircraft
CN113627473A (en)Water surface unmanned ship environment information fusion sensing method based on multi-mode sensor
CN113379848A (en)Target positioning method based on binocular PTZ camera
CN111220126A (en)Space object pose measurement method based on point features and monocular camera
CN106878687A (en) A multi-sensor based vehicle environment recognition system and omnidirectional vision module
CN111260730B (en)Method for calibrating variable visual axis vision system by using reference transmission principle
Deng et al.Long-range binocular vision target geolocation using handheld electronic devices in outdoor environment
CN113028990B (en) A laser tracking attitude measurement system and method based on weighted least squares
CN110930508A (en)Two-dimensional photoelectric video and three-dimensional scene fusion method
CN106127115B (en) A hybrid vision target localization method based on panoramic and conventional vision
CN106157322B (en) A method of camera installation position calibration based on plane mirror
CN206611521U (en)A kind of vehicle environment identifying system and omni-directional visual module based on multisensor
CN108710127B (en)Target detection and identification method and system under low-altitude and sea surface environments
CN112884832A (en)Intelligent trolley track prediction method based on multi-view vision
Wang et al.Corners positioning for binocular ultra-wide angle long-wave infrared camera calibration
CN108458692B (en) A short-range three-dimensional attitude measurement method
CN114973037B (en)Method for intelligently detecting and synchronously positioning multiple targets by unmanned aerial vehicle
CN119758373A (en) A target detection method based on infrared camera and three-dimensional laser radar
Gao et al.A method of spatial calibration for camera and radar
CN114754743B (en)Target positioning method for carrying multiple PTZ cameras on ground intelligent unmanned platform
Liu et al.VSG: Visual Servo Based Geolocalization for Long-Range Target in Outdoor Environment
Wang et al.A mobile stereo vision system with variable baseline distance for three-dimensional coordinate measurement in large FOV

Legal Events

DateCodeTitleDescription
PB01Publication
SE01Entry into force of request for substantive examination
GR01Patent grant
