CN119758708B - A method for stable target tracking combined with ground speed compensation - Google Patents

A method for stable target tracking combined with ground speed compensation
Info

Publication number: CN119758708B
Application number: CN202510256462.8A
Authority: CN (China)
Prior art keywords: image, visible light, infrared, tracking, target
Legal status: Active (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Other languages: Chinese (zh)
Other versions: CN119758708A
Inventors: 王宣, 白冠冰, 周占民, 刘成龙
Assignee: Changchun Institute of Optics, Fine Mechanics and Physics of CAS (the listed assignees may be inaccurate; Google has not performed a legal analysis)
Application filed by Changchun Institute of Optics, Fine Mechanics and Physics of CAS; priority to CN202510256462.8A; application granted; status active.

Abstract

Translated from Chinese


The present invention relates to the technical field of unmanned aerial vehicle target tracking, and in particular to a method for stable target tracking combined with ground speed compensation. The method includes: measuring the flight speed and direction of the unmanned aerial vehicle to determine the ground speed; using an angle measurement system to obtain angle parameters on the flight path of the unmanned aerial vehicle; calculating the ground speed compensation value from the ground speed and angle parameters; capturing visible light and infrared images with the photoelectric turret and performing real-time registration and fusion; extracting local contrast and information entropy from the registered, fused images and dynamically adjusting the fusion ratio of the visible light and infrared images through a fusion tracking method to generate a fused tracking image; inputting the fused tracking image into a correlation tracking algorithm to track the target; offsetting the target search area according to the ground speed compensation parameters; and having the servo system input the ground speed compensation parameters into the speed closed-loop control loop to stably track the target in combination with the miss distance. The advantages are: real-time registration, which improves image registration efficiency, and improved target tracking performance.

Description

Target stable tracking method combined with ground speed compensation
Technical Field
The invention relates to the technical field of unmanned aerial vehicle target tracking, in particular to a target stable tracking method combined with ground speed compensation.
Background
In the technical field of unmanned aerial vehicle-mounted photoelectric turrets, target tracking technology is a core component that directly influences the execution efficiency and accuracy of tasks such as reconnaissance, monitoring, and target identification. As unmanned aerial vehicle technology has developed, higher requirements have been placed on target tracking technology, in particular its stability and accuracy in dynamic flight environments. In an unmanned aerial vehicle-mounted photoelectric turret system, achieving stable target tracking is important for improving the efficiency and effectiveness with which the unmanned aerial vehicle executes its tasks.
In the prior art, unmanned aerial vehicle-mounted photoelectric turret systems typically include a visible light camera and an infrared camera capable of providing both a visible image and an infrared image of a target. However, because the unmanned aerial vehicle is subject to changes in flight speed and attitude during high-speed or maneuvering flight, and because the computing power of the onboard embedded computer is limited, target tracking is constrained in certain respects. Especially in dynamically changing environments, the unmanned aerial vehicle must adjust the orientation of its photoelectric turret in real time to maintain stable tracking of the target, which is technically very challenging.
Firstly, when the unmanned aerial vehicle flies at high speed or maneuvers, changes in flight speed and attitude make it difficult for the target tracking system to accurately predict the target's motion trajectory. Secondly, existing systems often lack an effective ground speed compensation mechanism and cannot accurately compensate for the unmanned aerial vehicle's motion relative to the ground, which degrades tracking stability. In addition, prior-art image registration algorithms generally cannot run in real time, limiting the real-time performance and reliability of target tracking.
In practical applications, unmanned aerial vehicle onboard optoelectronic turret systems are required to perform tasks under a variety of environmental conditions, including different flying heights, speeds, and complex meteorological conditions. These factors all present challenges to the performance of the target tracking system. For example, in high-speed flight, the unmanned aerial vehicle needs to quickly adjust the direction of the photoelectric turret to keep stable tracking of the target, and in complex weather conditions such as fog, rain or night, the unmanned aerial vehicle needs to track the target by utilizing multi-mode information such as infrared images.
In order to improve the accuracy and stability of target tracking, researchers have proposed various methods, including model-based prediction methods, image processing techniques, and machine learning algorithms. However, these methods still suffer from deficiencies in real-time performance, accuracy, and robustness, especially in maneuvering flight environments.
Furthermore, prior-art image registration algorithms typically require complex ground calibration procedures, including accurate measurement of the camera's boresight center offset, edge distortion, and pixel mapping. These calibration processes are not only time-consuming and labor-intensive, but also difficult to adapt to the requirements of zoom cameras in unmanned aerial vehicle-mounted turret systems.
In summary, existing unmanned aerial vehicle-mounted photoelectric turret systems suffer, in the aspect of target tracking, from poor adaptability to dynamic flight environments, insufficient ground speed compensation, and poor real-time image registration performance. These problems limit the efficiency and effectiveness of the unmanned aerial vehicle when performing tasks in complex environments. It is therefore necessary to develop a new stable target tracking method combined with ground speed compensation to improve the unmanned aerial vehicle's tracking performance in dynamic flight environments.
Disclosure of Invention
The present invention is directed to solving the above-mentioned problems, and provides a method for stably tracking a target in combination with ground speed compensation.
The invention aims to provide a method for stably tracking a target by combining ground speed compensation, which specifically comprises the following steps:
S1, measuring the flight speed and flight direction of the unmanned aerial vehicle through the onboard IMU to determine the ground speed; obtaining the angle parameters on the flight path using the angle measurement system of the onboard photoelectric turret; and calculating the ground speed compensation value from the ground speed and the angle parameters;
S2, mounting a visible light camera and an infrared camera on an unmanned aerial vehicle-mounted photoelectric turret, capturing a visible light image and an infrared image by using the unmanned aerial vehicle-mounted photoelectric turret, and performing real-time registration fusion by using an image registration algorithm;
S3, extracting local contrast and information entropy from the registered, fused images; dynamically adjusting the fusion ratio of the visible light image and the infrared image through a fusion tracking method to generate a fused tracking image; inputting the fused tracking image into a correlation tracking algorithm to track the target; and offsetting the target search area according to the ground speed compensation parameters, ensuring that the target is found with maximum probability;
S4, the servo system inputs the ground speed compensation parameter into a speed closed-loop control loop and, combined with the miss distance from correlation tracking, stably tracks the target.
Preferably, the angle parameters in step S1 include the azimuth angle α and the pitch angle β;
the calculation of the ground speed compensation value specifically comprises the following steps:
S101, defining the east velocity $V_E$ and north velocity $V_N$ of the unmanned aerial vehicle with respect to the azimuth angle α, and converting the east velocity $V_E$ and north velocity $V_N$ into the velocity component relative to the flight path of the unmanned aerial vehicle:

$$V_{path} = V_E \sin\alpha + V_N \cos\alpha$$

where α represents the azimuth angle on the flight path of the unmanned aerial vehicle, $V_E$ the east velocity, and $V_N$ the north velocity;
S102, adjusting the velocity component $V_{path}$ according to the pitch angle β on the flight path of the unmanned aerial vehicle to obtain the adjusted horizontal velocity component:

$$V_{horizontal} = V_{path} \cos\beta$$

where $V_{horizontal}$ is the horizontal velocity component calculated from the east velocity $V_E$, the north velocity $V_N$, the azimuth angle α, and the pitch angle β;
S103, calculating the ground speed compensation value:

$$C = k \, V_{horizontal}$$

where C represents the ground speed compensation value, k is a proportionality constant for adjusting the compensation strength, and $V_{horizontal}$ represents the horizontal velocity component.
Preferably, the step S2 specifically includes the following sub-steps:
S201, ground calibration, namely measuring and calibrating the visual field angle of the visible light in the Y direction under each focal length of the visible light camera and the visual field angle of the infrared light in the Y direction under each focal length of the infrared camera on the ground, recording the relation between the visual field angle and the focal length as a file, and storing the file on a camera control board;
S202, roughly matching the angle of view, namely controlling the movement of a visible light camera or an infrared camera by a camera control board through a PID algorithm to enable the angle of view of a single pixel of a visible light image to be equal to the angle of view of a single pixel of an infrared image;
s203, extracting and registering image descriptors, namely extracting heterogeneous image descriptors of the visible light image and the infrared image, matching the descriptors, storing the projection relation of each pixel coordinate of the visible light image and each pixel coordinate of the infrared image as a matrix A, and calling the matrix A to finish registration.
Preferably, the heterogeneous image descriptors of the visible light image and the infrared image in step S203 comprise phase consistency descriptors and gradient descriptors, matched by the weighted correlation distance method, specifically as follows:
S2031, calculating the phase consistency descriptor of the heterogeneous image, which measures the phase of specific frequency components in the heterogeneous image; for each pixel it can be calculated by:

$$PC(x,y) = \frac{1}{M}\sum_{w=1}^{M}\phi_w(x,y)$$

where (x, y) are the pixel coordinates, w indexes the different scales, $\phi_w(x,y)$ is the phase at scale w, and M represents the number of scales;
S2032, calculating the gradient descriptors of the heterogeneous image;
the gradient components of the heterogeneous image are:

$$G_x^{(m)}(x,y) = \frac{\partial I(x,y)}{\partial x}, \qquad G_y^{(m)}(x,y) = \frac{\partial I(x,y)}{\partial y}$$

where $G_x^{(m)}$ and $G_y^{(m)}$ are the gradients in the x and y directions at scale m, and I is the brightness of the image; the gradient descriptor of the heterogeneous image is thus expressed as:

$$\theta_m(x,y) = \arctan\!\left(\frac{G_y^{(m)}(x,y)}{G_x^{(m)}(x,y)}\right)$$

where $\theta_m(x,y)$ represents the direction of the gradient, i.e. the direction of the brightness variation of the image at the point (x, y);
S2033, setting the statistical scale to m; with the (x, y) pixel as center, the gradient histogram of the surrounding 10×10 pixels is expressed as:

$$H_m(x,y) = \mathrm{hist}\left\{\theta_m(u,v) \mid (u,v) \in N_{10\times 10}(x,y)\right\}$$

S2034, taking the image target size as a reference, taking M/2 scales upwards and M/2 scales downwards, M scales in total; the phase consistency descriptor $PC_m$, gradient descriptor $\theta_m$, and gradient histogram $H_m$ at each scale form the descriptor vectors $D_m$ under the M scales;
S2035, matching by the weighted correlation distance method, expressed as:

$$WCD = \sum_{m=1}^{M} w_m\,\mathrm{corr}\!\left(D_m^{vis}, D_m^{ir}\right)$$

where WCD represents the weighted correlation distance, i.e. the weighted sum, from scale 1 to scale M, of the similarity between the visible light image and infrared image descriptor vectors; $D_m^{vis}$ and $D_m^{ir}$ represent the multi-scale descriptor vectors at the m-th scale in the visible light image and the infrared image, respectively; $w_m$ is the weight of the m-th scale, expressed as

$$w_m = \frac{1/\sigma_m^2}{\sum_{k=1}^{M} 1/\sigma_k^2}$$

where $\sigma_m^2$ is the variance of the m-th scale descriptor vectors $D_m^{vis}$ and $D_m^{ir}$;
when the WCD value is greater than 0.5, $D_m^{vis}$ and $D_m^{ir}$ are considered matched, and the projection relationship of each pixel coordinate in the visible light image and the infrared image is stored as matrix A.
Preferably, the method in step S202 for adjusting the field angle of a single pixel of the visible light image to equal the field angle of a single pixel of the infrared image is as follows:
When the unmanned aerial vehicle performs a task with the visible lens as the primary lens and infrared fused in for tracking, the following steps are executed: the visible lens acts as the master; when the ground station sends a wide-field/narrow-field command, only the visible lens responds; after the ground station finishes adjusting, the visible lens is stationary; the visible focal length value gCCD is then read out, the infrared focal length giving the same per-pixel field of view is calculated from gCCD, the visible pixel size, and the infrared pixel size, and the camera control board controls the infrared camera's repeated motion through a PID algorithm until the visible focal length is 1/6 of the infrared focal length;
When the unmanned aerial vehicle performs a task with the infrared lens as the primary lens, the infrared lens acts as the master; when the ground station sends a wide-field/narrow-field command, only the infrared lens responds; after the ground station finishes adjusting, the infrared lens is stationary; the infrared focal length value gIR is then read out, the visible focal length giving the same per-pixel field of view is calculated from gIR, the visible pixel size, and the infrared pixel size, and the camera control board controls the visible light camera's repeated motion through a PID algorithm until the visible focal length is 1/6 of the infrared focal length.
Preferably, the fusion tracking method in step S3 specifically includes the following sub-steps:
S301, calculating the local contrast of the visible light image and the infrared image, and normalizing them to obtain the local-contrast weights of the two images;
S302, calculating the information entropy of the visible light image and of the infrared image, and normalizing them to obtain the information-entropy weights of the two images;
S303, converting the RGB format file of the visible light image into YUV, superposing the Y-channel brightness value Yccd of the visible light image and the Y-channel brightness value Yir of the infrared image according to matrix A, and fusing pixel by pixel to generate the fused tracking image Yfused:

$$Y_{fused} = \tfrac{1}{2}\left(w^{lc}_{ccd} + w^{ent}_{ccd}\right) Y_{ccd} + \tfrac{1}{2}\left(w^{lc}_{ir} + w^{ent}_{ir}\right) Y_{ir}$$

where Yfused represents the fused tracking image, Yccd represents the Y-channel brightness value of the visible light image, and Yir represents the Y-channel brightness value of the infrared image; $w^{lc}_{ccd}$ and $w^{lc}_{ir}$ represent the visible and infrared image weights based on local contrast; $w^{ent}_{ccd}$ and $w^{ent}_{ir}$ represent the visible and infrared image weights based on information entropy.
Preferably, step S301 specifically includes the following sub-steps:
S3011, determining a feature-centered local area, applying a mean filter to the local area to obtain the local average brightness, calculating the standard deviation of the pixel values in the local area about that average, and taking the maximum standard deviation as the local contrast;
the local standard deviations of the visible light image and the infrared image are calculated respectively as:

$$\sigma_{ccd}(x,y) = \sqrt{\frac{1}{|\Omega_f|}\sum_{(u,v)\in\Omega_f}\left(I_{ccd}(u,v)-\mu_f\right)^2}$$

$$\sigma_{ir}(x,y) = \sqrt{\frac{1}{|\Omega_f|}\sum_{(u,v)\in\Omega_f}\left(I_{ir}(u,v)-\mu_f\right)^2}$$

where $I_{ccd}(x,y)$ is the pixel value of the visible image at coordinates (x, y), $I_{ir}(x,y)$ is the pixel value of the infrared image at coordinates (x, y), $\mu_f$ is the average pixel value of the local area $\Omega_f$ in the visible or infrared image, and $|\Omega_f|$ represents the total number of pixels of the local area $\Omega_f$;
the local contrast of the visible or infrared image is then:

CCDLocalContrast = max($\sigma_{ccd}$);
IRLocalContrast = max($\sigma_{ir}$);

S3012, normalizing the local contrasts of the visible light image and the infrared image so that the resulting weights sum to 1, obtaining the local-contrast weights of the two images:

$$w^{lc}_{ccd} = \frac{CCDLocalContrast}{CCDLocalContrast + IRLocalContrast}, \qquad w^{lc}_{ir} = \frac{IRLocalContrast}{CCDLocalContrast + IRLocalContrast}$$

where $w^{lc}_{ccd}$ and $w^{lc}_{ir}$ represent the visible and infrared image weights based on local contrast;
Step S302 specifically includes the following sub-steps:
S3021, representing with a histogram the number of times each gray value i appears in the visible light image or the infrared image; for each pixel of the gray image $I_{ccd}$ or $I_{ir}$:

$$h_{ccd}(i) = \sum_{x,y}\delta\left(I_{ccd}(x,y)=i\right), \qquad h_{ir}(i) = \sum_{x,y}\delta\left(I_{ir}(x,y)=i\right)$$

where $h_{ccd}$ and $h_{ir}$ represent the visible and infrared histograms; i represents a gray value; $\delta(\cdot)$ is the indicator function, equal to 1 when the pixel's gray value equals i; $I_{ccd}(x,y)$ and $I_{ir}(x,y)$ represent the gray values of the pixel with coordinates (x, y) in the visible and infrared images;
S3022, normalizing: converting the histogram into a probability distribution by dividing the count of each gray value by the total pixel number of the image:

$$p_{ccd}(i) = \frac{h_{ccd}(i)}{W_{ccd}\,H^{img}_{ccd}}, \qquad p_{ir}(i) = \frac{h_{ir}(i)}{W_{ir}\,H^{img}_{ir}}$$

where $p_{ccd}$ and $p_{ir}$ are the gray-value probability distributions of the visible and infrared images, $W_{ccd}$ and $H^{img}_{ccd}$ are the width and height of the visible light image, and $W_{ir}$ and $H^{img}_{ir}$ are the width and height of the infrared image;
S3023, using the Shannon information entropy formula, multiplying the probability of each gray value by its base-2 logarithm and accumulating over all 256 possible gray values to calculate the visible image entropy Hccd and the infrared image entropy Hir:

$$H_{ccd} = -\sum_{i=0}^{255} p_{ccd}(i)\log_2 p_{ccd}(i), \qquad H_{ir} = -\sum_{i=0}^{255} p_{ir}(i)\log_2 p_{ir}(i)$$

S3024, normalizing the information entropies of the visible light image and the infrared image so that the resulting weights sum to 1, obtaining the information-entropy weights:

$$w^{ent}_{ccd} = \frac{H_{ccd}}{H_{ccd}+H_{ir}}, \qquad w^{ent}_{ir} = \frac{H_{ir}}{H_{ccd}+H_{ir}}$$

where $w^{ent}_{ccd}$ and $w^{ent}_{ir}$ represent the visible and infrared image weights based on information entropy.
Preferably, the correlation tracking algorithm in step S3 specifically includes the following steps:
S304, initializing the target area: determining the initial area of the target in the fused tracking image through a target detection algorithm; this initial area serves as the starting point of the tracking algorithm;
S305, template matching and correlation calculation: using the initial area of the target as the template, searching for the target by calculating the correlation between the template and each candidate position in the image:

$$R(x,y) = \frac{\sum_{i,j}\left(T(i,j)-\mu_T\right)\left(I(x+i,\,y+j)-\mu_I\right)}{\sqrt{\sum_{i,j}\left(T(i,j)-\mu_T\right)^2\,\sum_{i,j}\left(I(x+i,\,y+j)-\mu_I\right)^2}}$$

where R(x, y) is the correlation score at the fused-image location (x, y), T(i, j) is a pixel value of the fused template image, I(x+i, y+j) is a pixel value of the target fused image, and $\mu_T$ and $\mu_I$ are the means of the template and the target region, respectively;
S306, peak detection and target positioning: determining the new position of the target by searching for the local maximum in the correlation map; if the peak is higher than a preset threshold, the target is considered successfully tracked at that position, and the target model is updated according to the tracking result.
Preferably, the method in step S3 for offsetting the target search area according to the ground speed compensation parameters specifically includes the following steps:
S307, calculating, from the ground speed compensation value C, the distance ΔPoffset the unmanned aerial vehicle moves at ground speed C during the time Δt:

$$\Delta P_{offset} = C \cdot \Delta t$$

S308, determining the center of the initial search area as the target position Pcurrent currently observed by the unmanned aerial vehicle, and moving the search area by ΔPoffset; the center of the new search area is:

$$P_{new} = P_{current} + \Delta P_{offset}$$

S309, performing the correlation tracking algorithm in the new search area, searching for the target with maximum probability, and calculating the target miss distance ΔPerror.
Preferably, step S4 specifically includes the following sub-steps:
S401, the servo system comprises two closed control loops, a speed loop and a tracking loop; the speed loop receives the ground speed compensation value C, forming a speed loop containing the compensation value:

$$\Delta P_{speed\_correction} = K_{p\_speed}\left(\Delta P_{speed\_error}+C\right) + K_{i\_speed}\int\left(\Delta P_{speed\_error}+C\right)dt$$

where ΔPspeed_error represents the angular-rate error of the speed loop, i.e. the difference between the expected and the actual angular rate; C is the ground speed compensation value; Kp_speed is the proportional gain of the speed loop; Ki_speed is the integral gain of the speed loop; and ΔPspeed_correction is the speed-loop control output that closes the speed loop;
S402, the error input of the servo system's tracking loop is the miss distance ΔPerror; the orientation of the photoelectric turret is adjusted according to ΔPerror to reduce the deviation and achieve stable tracking:

$$\Delta P_{correction} = K_p\,\Delta P_{error} + K_i\int\Delta P_{error}\,dt + K_d\,\frac{d\,\Delta P_{error}}{dt}$$

where ΔPcorrection is the tracking-loop control output that closes the tracking loop for accurate target tracking, ΔPerror represents the miss distance, and Kp, Ki, and Kd are the proportional, integral, and derivative gains, respectively, of the servo control system's tracking loop.
Compared with the prior art, the invention has the following beneficial effects:
the ground speed compensation mechanism is introduced, so that the tracking stability and accuracy of the unmanned aerial vehicle to the target in a dynamic flight environment are obviously improved.
The image registration algorithm based on the multi-scale descriptor and the optimized search strategy realizes the real-time registration fusion of images and improves the real-time performance of target tracking.
The fusion tracking method combines local contrast and information entropy, dynamically adjusts the fusion proportion of images, maximizes the information quantity of the fused images, and enhances the accuracy and the robustness of target tracking.
The method is suitable for image registration under different resolution ratios and zoom conditions, reduces the workload of registration calibration, and improves the efficiency of image registration.
In conclusion, the visible and infrared images are registered in real time and dynamically fused. The fused image is feature-rich, so target features can be extracted effectively; the target search area is offset according to the ground speed compensation parameters, which effectively improves the reliability of the target tracking algorithm; and the servo system feeds the ground speed compensation parameters into the speed closed-loop control loop, which, combined with the miss distance, allows the target to be tracked stably, effectively improving the unmanned aerial vehicle's tracking performance on ground targets in a dynamic flight environment. The method is suitable for visible-infrared image registration where the visible and infrared cameras have different resolutions and zoom lenses, and is particularly applicable to registering images from the variable-focal-length visible light camera and variable-focal-length infrared camera of an unmanned aerial vehicle-mounted photoelectric turret; it has broad application prospects and practical value, and significantly reduces the workload of registration calibration.
Drawings
FIG. 1 is a flow chart of a method for target stability tracking in combination with ground speed compensation according to an embodiment of the present invention.
Fig. 2 is a visible light image of an aerial photograph of an unmanned aerial vehicle provided according to an embodiment of the present invention.
Fig. 3 is an infrared image of an aerial photograph of an unmanned aerial vehicle provided in accordance with an embodiment of the present invention.
Fig. 4 is a fused tracking image generated after a visible light image and an infrared image are fused on an unmanned aerial vehicle according to an embodiment of the present invention.
Detailed Description
Hereinafter, embodiments of the present invention will be described with reference to the accompanying drawings. In the following description, like modules are denoted by like reference numerals. In the case of the same reference numerals, their names and functions are also the same. Therefore, a detailed description thereof will not be repeated.
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention will be further described in detail with reference to the accompanying drawings and specific embodiments. It should be understood that the specific embodiments described herein are for purposes of illustration only and are not to be construed as limiting the invention.
Referring to fig. 1, the invention provides a method for stably tracking a target in combination with ground speed compensation, which specifically comprises the following steps:
S1, measuring the flight speed and the flight direction of an unmanned aerial vehicle through an unmanned aerial vehicle-mounted IMU, and determining the ground speed;
The ground speed compensation compensates the motion of the unmanned aerial vehicle relative to the ground: the up, east, and north velocities are converted into their projection on the ground plane. The compensation value is calculated as follows:
The unmanned aerial vehicle has up velocity $V_U$, east velocity $V_E$, and north velocity $V_N$; α is the angle between the azimuth of the photoelectric turret and the flight path of the unmanned aerial vehicle, and β is the pitch angle of the photoelectric turret; the ground speed compensation value is calculated from these velocity components and angles;
S101, converting the east velocity $V_E$ and north velocity $V_N$ into the velocity component relative to the unmanned aerial vehicle flight path (defined by the azimuth angle α):

$$V_{path} = V_E \sin\alpha + V_N \cos\alpha$$

where α represents the azimuth angle on the flight path of the unmanned aerial vehicle, $V_E$ the east velocity, and $V_N$ the north velocity;
S102, adjusting the velocity component $V_{path}$ according to the pitch angle β on the flight path to obtain the adjusted horizontal velocity component, accounting for the inclination of the unmanned aerial vehicle relative to the horizontal plane:

$$V_{horizontal} = V_{path} \cos\beta$$

where $V_{horizontal}$ is the horizontal velocity component calculated from the east velocity $V_E$, the north velocity $V_N$, the turret azimuth angle α, and the pitch angle β;
S103, calculating the ground speed compensation value:

$$C = k \, V_{horizontal}$$

where C represents the ground speed compensation value, k is a proportionality constant for adjusting the compensation strength, and $V_{horizontal}$ represents the horizontal velocity component.
Step S1 also comprises inputting the ground speed compensation value into the correlation tracking algorithm to predict the dynamic position change of the target.
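To make the computation concrete, the following Python sketch implements S101–S103 using the formulas as reconstructed above; the function name and the example values are illustrative, not taken from the patent.

```python
import math

def ground_speed_compensation(v_east, v_north, azimuth_rad, pitch_rad, k=1.0):
    """Ground speed compensation value C per S101-S103 (reconstructed formulas)."""
    # S101: project east/north velocities onto the flight path defined by azimuth alpha
    v_path = v_east * math.sin(azimuth_rad) + v_north * math.cos(azimuth_rad)
    # S102: reduce to the horizontal component using the pitch angle beta
    v_horizontal = v_path * math.cos(pitch_rad)
    # S103: scale by the proportionality constant k (compensation strength)
    return k * v_horizontal

# Illustrative values: 20 m/s east, 10 m/s north, azimuth 30 deg, pitch 10 deg
C = ground_speed_compensation(20.0, 10.0, math.radians(30.0), math.radians(10.0))
```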
S2, mounting a visible light camera and an infrared camera on an unmanned aerial vehicle-mounted photoelectric turret, capturing a visible light image and an infrared image by using the unmanned aerial vehicle-mounted photoelectric turret (see fig. 2 and 3), and performing real-time registration fusion by an image registration algorithm, wherein the method specifically comprises the following sub-steps:
S201, ground calibration, namely measuring and calibrating the visual field angle of the visible light in the Y direction under each focal length of the visible light camera and the visual field angle of the infrared light in the Y direction under each focal length of the infrared camera on the ground, recording the relation between the visual field angle and the focal length as a file, and storing the file on a camera control board;
The measurement calibration method specifically comprises gradually measuring the Y-direction angle of view of a visible light camera and the Y-direction angle of view of an infrared camera by adopting a foldback type visible infrared double-light common-view-field light pipe (the light pipe contains cross wires and the stepping length is 0.1 m);
The specific calibration procedure for the visible light camera is as follows: set the visible camera lens to focal length CCD_F and lock the pitch angle of the photoelectric turret at 0; align the upper edge of the visible camera's field of view with the crosshair of the light pipe and read the turret azimuth CCDup; rotate the turret until the lower edge of the field of view is aligned with the crosshair and read the azimuth CCDdown; the Y-direction field angle at focal length CCD_F is then CCD_Y = CCDup − CCDdown. Record the correspondence between visible focal length and field angle into array c; measure the correspondence between the infrared camera's focal length and field angle in the same way and record it into array d; store both on the camera control board of the unmanned aerial vehicle-mounted photoelectric turret.
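As a sketch of how the recorded arrays c and d might be used in flight, the snippet below interpolates the calibrated focal-length/field-angle tables; the table values are placeholders, not the patent's calibration data.

```python
import numpy as np

# Placeholder calibration tables: (focal length in mm, Y-direction field angle in degrees)
c_focal = np.array([25.0, 50.0, 100.0, 300.0])   # visible camera (array c)
c_fov   = np.array([12.0,  6.0,   3.0,   1.0])
d_focal = np.array([15.0, 30.0, 60.0])           # infrared camera (array d)
d_fov   = np.array([10.0,  5.0,  2.5])

def fov_from_focal(focal_mm, focal_table, fov_table):
    """Look up the calibrated Y-direction field angle for a given focal length."""
    return float(np.interp(focal_mm, focal_table, fov_table))

vis_fov = fov_from_focal(75.0, c_focal, c_fov)   # visible field angle at 75 mm
```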
S202, rough matching of the angle of view, namely controlling the motion of a visible light camera or an infrared camera by a camera control board through a PID algorithm to enable the angle of view of a single pixel of a visible light image to be equal to the angle of view of a single pixel of an infrared image, wherein the method comprises the following specific operations:
If each pixel of the visible light image and the infrared image is to subtend the same field angle, the focal-length relationship between the visible lens and the infrared lens is derived as follows:
Visible pixel size: $d_{vis}$ = 2.5 μm; infrared pixel size: $d_{ir}$ = 15 μm; the focal length of the visible lens is $f_{vis}$, and the focal length of the infrared lens is $f_{ir}$;
the angular resolution θ can be expressed as θ = pixel size / focal length;
so the visible image's angular resolution is $\theta_{vis} = d_{vis}/f_{vis}$, and the infrared image's angular resolution is $\theta_{ir} = d_{ir}/f_{ir}$;
to make $\theta_{vis}$ equal to $\theta_{ir}$, the following equation can be established: $d_{vis}/f_{vis} = d_{ir}/f_{ir}$, i.e. $f_{vis} = f_{ir}\cdot d_{vis}/d_{ir}$. With a visible pixel size of 2.5 μm and an infrared pixel size of 15 μm, the field angle of a single visible pixel equals that of a single infrared pixel exactly when the visible focal length is 1/6 of the infrared focal length.
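A minimal sketch of this relationship, assuming the 2.5 μm and 15 μm pixel sizes given above:

```python
D_VIS_UM = 2.5   # visible pixel size (um), from the description
D_IR_UM = 15.0   # infrared pixel size (um), from the description

def matched_visible_focal(f_ir_mm):
    """Visible focal length that equalizes per-pixel field angles.

    From theta = pixel_size / focal_length and theta_vis == theta_ir:
    f_vis = f_ir * d_vis / d_ir = f_ir / 6 for the pixel sizes above.
    """
    return f_ir_mm * D_VIS_UM / D_IR_UM

assert matched_visible_focal(60.0) == 10.0  # 60 mm infrared -> 10 mm visible
```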
The visible light camera and the infrared camera adopt an external triggering mode, so that the exposure of the visible light and the infrared light at the same time is ensured.
The method for adjusting the field angle of a single pixel of the visible light image to equal the field angle of a single pixel of the infrared image, i.e. adjusting the visible focal length to 1/6 of the infrared focal length, is as follows:
When the unmanned aerial vehicle performs a task with the visible lens as the primary lens and infrared fused in for tracking, the following steps are executed: the visible lens acts as the master; when the ground station sends a wide-field/narrow-field command, only the visible lens responds; after the ground station finishes adjusting, the visible lens is stationary; the visible focal length value gCCD is then read out, the infrared focal length giving the same per-pixel field of view is calculated from gCCD, the visible pixel size, and the infrared pixel size, and the camera control board controls the infrared camera's repeated motion through a PID algorithm until the visible focal length is 1/6 of the infrared focal length;
When the unmanned aerial vehicle performs a task with the infrared lens as the primary lens, the infrared lens acts as the master; when the ground station sends a wide-field/narrow-field command, only the infrared lens responds; after the ground station finishes adjusting, the infrared lens is stationary; the infrared focal length value gIR is then read out, the visible focal length giving the same per-pixel field of view is calculated from gIR, the visible pixel size, and the infrared pixel size, and the camera control board controls the visible light camera's repeated motion through a PID algorithm until the visible focal length is 1/6 of the infrared focal length.
S203, image descriptor extraction and registration: extracting the heterogeneous image descriptors of the visible light image and the infrared image, matching the descriptors, and storing the projection relationship of each pixel coordinate between the visible and infrared images as matrix A;
The heterogeneous image descriptors of the visible light image and the infrared image comprise phase consistency descriptors and gradient descriptors. Matching is performed by combining the phase consistency and gradient descriptors into universal feature descriptors of the visible and infrared images and comparing them with the Weighted Correlation Distance (WCD) method, specifically as follows:
S2031, calculating the phase consistency descriptor of the heterogeneous image, which measures the phase of specific frequency components in the heterogeneous image; for each pixel it can be calculated by:

$$PC(x,y) = \frac{1}{M}\sum_{w=1}^{M}\phi_w(x,y)$$

where (x, y) are the pixel coordinates, w indexes the different scales, $\phi_w(x,y)$ is the phase at scale w, and M represents the number of scales (typically 10);
S2032, calculating the gradient descriptors of the heterogeneous image;
the gradient components of the heterogeneous image are:

$$G_x^{(m)}(x,y) = \frac{\partial I(x,y)}{\partial x}, \qquad G_y^{(m)}(x,y) = \frac{\partial I(x,y)}{\partial y}$$

where $G_x^{(m)}$ and $G_y^{(m)}$ are the gradients in the x and y directions at scale m, and I is the brightness of the image; the gradient descriptor of the heterogeneous image is thus expressed as:

$$\theta_m(x,y) = \arctan\!\left(\frac{G_y^{(m)}(x,y)}{G_x^{(m)}(x,y)}\right)$$

where $\theta_m(x,y)$ represents the direction of the gradient, i.e. the direction of the brightness variation of the image at the point (x, y);
S2033, setting the statistical scale to m; with the (x, y) pixel as center, the gradient histogram of the surrounding 10×10 pixels is expressed as:

$$H_m(x,y) = \mathrm{hist}\left\{\theta_m(u,v) \mid (u,v) \in N_{10\times 10}(x,y)\right\}$$

S2034, taking the image target size as a reference, taking M/2 scales upwards and M/2 scales downwards, M scales in total. The phase consistency descriptor $PC_m$, gradient descriptor (gradient magnitude and direction) $\theta_m$, and gradient histogram $H_m$ at the m-th scale constitute the descriptor vector at that scale:

$$D_m = \left[PC_m,\ \theta_m,\ H_m\right]$$

The visible light image and the infrared image are both processed in this way, and each feature point in the two images generates a rich multi-scale descriptor vector, $D^{vis}$ and $D^{ir}$ respectively. These descriptors effectively describe the local structural information of the visible and infrared images and provide a solid basis for feature matching.
S2035, matching by the Weighted Correlation Distance (WCD) method, expressed as:

$$WCD = \sum_{m=1}^{M} w_m\,\mathrm{corr}\!\left(D_m^{vis}, D_m^{ir}\right)$$

where WCD represents the weighted correlation distance, i.e. the weighted sum, from scale 1 to scale M, of the similarity between the visible light image and infrared image descriptor vectors; m is the scale of the descriptor (ranging from 1 to M) and M represents the number of scales (typically 10); $D_m^{vis}$ and $D_m^{ir}$ represent the multi-scale descriptor vectors at the m-th scale in the visible light image and the infrared image, respectively; $w_m$ is the weight of the m-th scale, expressed as

$$w_m = \frac{1/\sigma_m^2}{\sum_{k=1}^{M} 1/\sigma_k^2}$$

where $\sigma_m^2$ is the variance of the m-th scale descriptor vectors $D_m^{vis}$ and $D_m^{ir}$;
when the WCD value is greater than 0.5, $D_m^{vis}$ and $D_m^{ir}$ are considered matched, and the projection relationship of each pixel coordinate in the visible light image and the infrared image is stored as matrix A.
The principle is briefly as follows: after step S202, the visible and infrared fields of view are in theory essentially consistent in the Y direction, but because of the field-angle calibration error in step S201 and the focus-control error in step S202, slight differences remain between the visible and infrared fields, so the registration operation is still required. Computing the universal feature descriptors of the visible light image and the infrared image is the key step in matching them: it provides a unique vector representation for each feature point of both images.
The Weighted Correlation Distance (WCD) approach takes into account not only the Euclidean distance between feature vectors but also the correlation between feature dimensions. This approach is particularly suitable when there is an inherent relationship between the feature dimensions.
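As an illustration of the matching rule, the sketch below scores two multi-scale descriptor sets with a weighted correlation; the inverse-variance weighting follows the reconstruction above and is an assumption, as is the use of the Pearson correlation coefficient for the per-scale similarity.

```python
import numpy as np

def weighted_correlation_distance(d_vis, d_ir):
    """WCD between two multi-scale descriptor sets (S2035, reconstructed).

    d_vis, d_ir: lists of M per-scale 1-D descriptor vectors of equal length.
    """
    M = len(d_vis)
    # Per-scale weights: normalized inverse variance of the paired descriptors
    var = np.array([np.var(np.concatenate([d_vis[m], d_ir[m]])) + 1e-12
                    for m in range(M)])
    w = (1.0 / var) / np.sum(1.0 / var)
    # Per-scale similarity: Pearson correlation between the two descriptor vectors
    corr = np.array([np.corrcoef(d_vis[m], d_ir[m])[0, 1] for m in range(M)])
    return float(np.sum(w * corr))  # > 0.5 is treated as a match

# Example with random 32-dimensional descriptors over M = 10 scales
rng = np.random.default_rng(0)
d_a = [rng.normal(size=32) for _ in range(10)]
wcd = weighted_correlation_distance(d_a, d_a)  # identical sets -> WCD == 1.0
```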
Because the visible and infrared cameras use an external trigger, exposure at the same instant is guaranteed. After exposure, each new frame of visible and infrared imagery reaches the onboard computer, and real-time visible-infrared registration is completed simply by applying matrix A, which effectively guarantees the real-time performance of registration.
S3, extracting local contrast and information entropy from the registered, fused images; dynamically adjusting the fusion ratio of the visible light image and the infrared image through the fusion tracking method to generate a fused tracking image (Fig. 4); inputting the fused tracking image into the correlation tracking algorithm to track the target; and offsetting the target search area according to the ground speed compensation parameters, ensuring that the target is found with maximum probability;
The fusion tracking method specifically comprises the following steps:
S301, calculating local contrast of a visible light image and an infrared image, carrying out normalization processing on the local contrast of the visible light image and the infrared image to obtain weight of the local contrast of the visible light image and the infrared image, and specifically comprising the following sub-steps:
S3011, determining a feature-centered local area, applying a mean filter to the local area to obtain the local average brightness, calculating the standard deviation of the pixel values in the local area about that average, and taking the maximum standard deviation as the local contrast.
The local standard deviation $\sigma_f$ is a measure of the fluctuation of pixel values within the local area $\Omega_f$; for the visible light image and the infrared image it is calculated respectively as:

$$\sigma_{ccd}(x,y) = \sqrt{\frac{1}{|\Omega_f|}\sum_{(u,v)\in\Omega_f}\left(I_{ccd}(u,v)-\mu_f\right)^2}$$

$$\sigma_{ir}(x,y) = \sqrt{\frac{1}{|\Omega_f|}\sum_{(u,v)\in\Omega_f}\left(I_{ir}(u,v)-\mu_f\right)^2}$$

where $I_{ccd}(x,y)$ is the pixel value of the visible image at coordinates (x, y), $I_{ir}(x,y)$ is the pixel value of the infrared image at coordinates (x, y), $\mu_f$ is the average pixel value of the local area $\Omega_f$ in the visible or infrared image, and $|\Omega_f|$ represents the total number of pixels of the local area $\Omega_f$.
The local standard deviation is obtained by summing the squared deviations of the pixel values from their mean $\mu_f$ within the local area $\Omega_f$ and taking the square root. This value reflects the degree of dispersion of pixel intensities within the feature region and is an indicator of the complexity of the local texture.
The local contrast of a visible or infrared image is defined as the maximum of its local standard deviation:

CCDLocalContrast = max($\sigma_{ccd}$);
IRLocalContrast = max($\sigma_{ir}$);

S3012, normalizing the local contrasts of the visible light image and the infrared image so that the resulting weights sum to 1, obtaining the local-contrast weights of the two images:

$$w^{lc}_{ccd} = \frac{CCDLocalContrast}{CCDLocalContrast + IRLocalContrast}, \qquad w^{lc}_{ir} = \frac{IRLocalContrast}{CCDLocalContrast + IRLocalContrast}$$

where $w^{lc}_{ccd}$ and $w^{lc}_{ir}$ represent the visible and infrared image weights based on local contrast.
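A sketch of S3011–S3012 under the reconstruction above; the window size and feature centers are illustrative.

```python
import numpy as np

def local_contrast(gray, centers, half=5):
    """Maximum local standard deviation over feature-centered areas (S3011)."""
    stds = []
    for cx, cy in centers:
        patch = gray[cy - half:cy + half, cx - half:cx + half].astype(np.float64)
        stds.append(patch.std())  # deviation of pixel values about the local mean
    return max(stds)

def contrast_weights(ccd_contrast, ir_contrast):
    """S3012: normalize the two contrasts so the weights sum to 1."""
    total = ccd_contrast + ir_contrast
    return ccd_contrast / total, ir_contrast / total
```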
S302, calculating information entropy of the visible light image and information entropy of the infrared image, and carrying out normalization processing on the information entropy of the visible light image and the information entropy of the infrared image to obtain weight of the information entropy of the visible light image and the information entropy of the infrared image, wherein the method specifically comprises the following sub-steps:
S3021, representing with a histogram the number of times each gray value i appears in the visible light image or the infrared image; for each pixel of the gray image $I_{ccd}$ or $I_{ir}$:

$$h_{ccd}(i) = \sum_{x,y}\delta\left(I_{ccd}(x,y)=i\right), \qquad h_{ir}(i) = \sum_{x,y}\delta\left(I_{ir}(x,y)=i\right)$$

where $h_{ccd}$ and $h_{ir}$ represent the visible and infrared histograms; i represents a gray value; $\delta(\cdot)$ is the indicator function, equal to 1 when the pixel's gray value equals i; $I_{ccd}(x,y)$ and $I_{ir}(x,y)$ represent the gray values of the pixel with coordinates (x, y) in the visible and infrared images;
S3022, normalizing: converting the histogram into a probability distribution by dividing the count of each gray value by the total pixel number of the image:

$$p_{ccd}(i) = \frac{h_{ccd}(i)}{W_{ccd}\,H^{img}_{ccd}}, \qquad p_{ir}(i) = \frac{h_{ir}(i)}{W_{ir}\,H^{img}_{ir}}$$

where $p_{ccd}$ and $p_{ir}$ are the gray-value probability distributions of the visible and infrared images, $W_{ccd}$ and $H^{img}_{ccd}$ are the width and height of the visible light image, and $W_{ir}$ and $H^{img}_{ir}$ are the width and height of the infrared image.
S3023, using the Shannon information entropy formula, multiplying the probability of each gray value by its base-2 logarithm and accumulating over all 256 possible gray values to calculate the visible image entropy Hccd and the infrared image entropy Hir:

$$H_{ccd} = -\sum_{i=0}^{255} p_{ccd}(i)\log_2 p_{ccd}(i), \qquad H_{ir} = -\sum_{i=0}^{255} p_{ir}(i)\log_2 p_{ir}(i)$$

The information entropy ranges from 0 to a maximum of 8 bits, reflecting the image's information content from completely ordered to completely disordered: when all pixels share the same gray value (i.e. the image is completely uniform), Hccd = 0 and Hir = 0; when every gray value occurs with equal probability, the entropy reaches its maximum, Hccd = 8 and Hir = 8.
S3024, normalizing the information entropies of the visible light image and the infrared image so that the resulting weights sum to 1, obtaining the information-entropy weights:

$$w^{ent}_{ccd} = \frac{H_{ccd}}{H_{ccd}+H_{ir}}, \qquad w^{ent}_{ir} = \frac{H_{ir}}{H_{ccd}+H_{ir}}$$

where $w^{ent}_{ccd}$ and $w^{ent}_{ir}$ represent the visible and infrared image weights based on information entropy.
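A sketch of S3021–S3024 for 8-bit gray images, following the formulas above:

```python
import numpy as np

def image_entropy(gray):
    """Shannon entropy in bits for an 8-bit gray image (S3021-S3023)."""
    hist = np.bincount(gray.ravel(), minlength=256).astype(np.float64)
    p = hist / gray.size          # S3022: histogram -> probability distribution
    p = p[p > 0]                  # zero-probability bins contribute nothing
    return float(-np.sum(p * np.log2(p)))  # 0 (uniform image) .. 8 (flat histogram)

def entropy_weights(h_ccd, h_ir):
    """S3024: normalize the two entropies so the weights sum to 1."""
    total = h_ccd + h_ir
    return h_ccd / total, h_ir / total
```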
S303, converting the RGB format file of the visible light image into YUV, superposing the Y-channel brightness value Yccd of the visible light image and the Y-channel brightness value Yir of the infrared image according to matrix A, and fusing pixel by pixel to generate the fused tracking image Yfused:

$$Y_{fused} = \tfrac{1}{2}\left(w^{lc}_{ccd} + w^{ent}_{ccd}\right) Y_{ccd} + \tfrac{1}{2}\left(w^{lc}_{ir} + w^{ent}_{ir}\right) Y_{ir}$$

where Yfused represents the fused tracking image, Yccd represents the Y-channel brightness value of the visible light image, and Yir represents the Y-channel brightness value of the infrared image; $w^{lc}_{ccd}$ and $w^{lc}_{ir}$ represent the visible and infrared image weights based on local contrast; $w^{ent}_{ccd}$ and $w^{ent}_{ir}$ represent the visible and infrared image weights based on information entropy.
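A sketch of the fusion step as reconstructed above, using OpenCV for the RGB-to-YUV conversion; averaging the contrast and entropy weights per modality is the assumed combination rule, and the infrared frame is assumed already reprojected through matrix A.

```python
import cv2
import numpy as np

def fuse_y_channels(bgr_ccd, y_ir, w_lc_ccd, w_lc_ir, w_ent_ccd, w_ent_ir):
    """S303 (reconstructed): blend visible and infrared Y channels pixel by pixel."""
    y_ccd = cv2.cvtColor(bgr_ccd, cv2.COLOR_BGR2YUV)[:, :, 0].astype(np.float64)
    w_ccd = 0.5 * (w_lc_ccd + w_ent_ccd)   # combined visible weight
    w_ir = 0.5 * (w_lc_ir + w_ent_ir)      # combined infrared weight
    y_fused = w_ccd * y_ccd + w_ir * y_ir.astype(np.float64)
    return np.clip(y_fused, 0, 255).astype(np.uint8)
```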
In step S3, the correlation tracking algorithm specifically includes the following steps:
S304, initializing the target area: determining the initial area of the target in the fused tracking image through a target detection algorithm; this initial area serves as the starting point of the tracking algorithm;
S305, template matching and correlation calculation: using the initial area of the target as the template, searching for the target by calculating the correlation between the template and each candidate position in the image:

$$R(x,y) = \frac{\sum_{i,j}\left(T(i,j)-\mu_T\right)\left(I(x+i,\,y+j)-\mu_I\right)}{\sqrt{\sum_{i,j}\left(T(i,j)-\mu_T\right)^2\,\sum_{i,j}\left(I(x+i,\,y+j)-\mu_I\right)^2}}$$

where R(x, y) is the correlation score at the fused-image location (x, y), T(i, j) is a pixel value of the fused template image, I(x+i, y+j) is a pixel value of the target fused image, and $\mu_T$ and $\mu_I$ are the means of the template and the target region, respectively.
Because the embedded onboard computer has limited computing power, searching every region of the whole frame is not feasible; the traditional approach is to search a limited area near the target's position in the previous frame.
S306, peak detection and target positioning: determining the new position of the target by searching for the local maximum in the correlation map; if the peak is higher than a preset threshold, the target is considered successfully tracked at that position, and the target model, including the target's shape, size, and appearance features, is updated according to the tracking result.
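For reference, OpenCV's TM_CCOEFF_NORMED mode computes the mean-subtracted normalized correlation R(x, y) shown above; a minimal tracking step might look like this (the 0.6 threshold is illustrative):

```python
import cv2

def track_step(fused, template, threshold=0.6):
    """S304-S306 sketch: correlate the template over the image and detect the peak."""
    response = cv2.matchTemplate(fused, template, cv2.TM_CCOEFF_NORMED)
    _, peak, _, peak_loc = cv2.minMaxLoc(response)  # global maximum of the map
    if peak < threshold:
        return None            # peak below threshold: tracking failed this frame
    return peak_loc            # top-left corner of the best match
```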
In step S3, the method for offsetting the target search area according to the ground speed compensation parameters specifically includes the following steps:
S307, calculating, from the ground speed compensation value C, the distance ΔPoffset the unmanned aerial vehicle moves at ground speed C during the time Δt:

$$\Delta P_{offset} = C \cdot \Delta t$$

S308, determining the center of the initial search area as the target position Pcurrent currently observed by the unmanned aerial vehicle, and moving the search area by ΔPoffset; the center of the new search area is:

$$P_{new} = P_{current} + \Delta P_{offset}$$

S309, performing the correlation tracking algorithm in the new search area, searching for the target with maximum probability, and calculating the target miss distance ΔPerror.
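A sketch of S307–S308; the offset is applied along the compensated axis, and the conversion of C·Δt into pixel units is assumed to be handled by the caller.

```python
def shifted_search_center(p_current, c, dt):
    """S307-S308: move the search-area center by the distance covered at ground speed C.

    p_current: (x, y) center of the current search area.
    c: ground speed compensation value projected onto the image x axis (assumed).
    dt: time elapsed since the last frame.
    """
    dp_offset = c * dt                               # S307
    return (p_current[0] + dp_offset, p_current[1])  # S308: shifted center
```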
S4, the servo system feeds the ground speed compensation parameter into the speed closed-loop control loop and, combined with the miss distance from correlation tracking, stably tracks the target; this specifically includes the following sub-steps:
S401, the servo system comprises two closed control loops, a speed loop and a tracking loop; the speed loop receives the ground speed compensation value C, forming a speed loop containing the compensation value:

$$\Delta P_{speed\_correction} = K_{p\_speed}\left(\Delta P_{speed\_error}+C\right) + K_{i\_speed}\int\left(\Delta P_{speed\_error}+C\right)dt$$

Here ΔPspeed_error is the angular-rate error of the speed loop, i.e. the difference between the expected and the actual angular rate, measured by the rate gyroscope mounted inside the photoelectric turret. The gyroscope can only measure changes in the turret's angular rate; it cannot measure the linear-rate change produced by the unmanned aerial vehicle's flight. C, the ground speed compensation value, represents exactly that linear-rate change, so ΔPspeed_error + C, the turret's angular rate plus the linear rate produced by the unmanned aerial vehicle's flight, comprehensively reflects the speed-loop error of the turret in actual flight; adding the compensation value C to the error term ΔPspeed_error therefore improves the accuracy of the speed-loop response. Kp_speed is the proportional gain of the speed loop, Ki_speed is the integral gain of the speed loop, and ΔPspeed_correction is the speed-loop control output that closes the speed loop.
S402, the error input of the servo system's tracking loop is the miss distance ΔPerror; the orientation of the photoelectric turret is adjusted according to ΔPerror to reduce the deviation and achieve stable tracking:

$$\Delta P_{correction} = K_p\,\Delta P_{error} + K_i\int\Delta P_{error}\,dt + K_d\,\frac{d\,\Delta P_{error}}{dt}$$

where ΔPcorrection is the tracking-loop control output that closes the tracking loop for accurate target tracking, ΔPerror represents the miss distance, and Kp, Ki, and Kd are the proportional, integral, and derivative gains, respectively, of the servo control system's tracking loop.
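The two loops can be sketched as discrete-time PI and PID controllers; the class structure and sampling scheme below are illustrative assumptions, while the error terms follow S401–S402.

```python
class SpeedLoopPI:
    """S401 (reconstructed): PI speed loop with the ground speed compensation C
    added to the gyro-measured angular-rate error."""
    def __init__(self, kp_speed, ki_speed):
        self.kp, self.ki, self.integral = kp_speed, ki_speed, 0.0

    def update(self, speed_error, c, dt):
        e = speed_error + c            # angular-rate error + ground-speed term
        self.integral += e * dt
        return self.kp * e + self.ki * self.integral

class TrackingLoopPID:
    """S402: PID tracking loop driven by the miss distance."""
    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral, self.prev_error = 0.0, 0.0

    def update(self, miss_distance, dt):
        self.integral += miss_distance * dt
        derivative = (miss_distance - self.prev_error) / dt
        self.prev_error = miss_distance
        return (self.kp * miss_distance + self.ki * self.integral
                + self.kd * derivative)
```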
The core of the method is the introduction of a ground speed compensation mechanism: the unmanned aerial vehicle's ground speed is calculated in real time and used as an input parameter of the tracking algorithm to predict the dynamic changes of the target, achieving stable target tracking. The unmanned aerial vehicle's flight speed and direction are obtained from its navigation system, and the ground speed vector is calculated from the azimuth and pitch angles of the onboard photoelectric turret; this vector is then used to adjust the correlation tracking algorithm, so the target tracking system can accurately predict changes in the target's position while the unmanned aerial vehicle is in flight.
The invention also comprises an image registration algorithm based on the multi-scale descriptor and the optimized search strategy, and the algorithm can run on an onboard computer in real time, so that the accuracy of target tracking is further improved. In addition, a fusion tracking method is provided, and the method combines local contrast and information entropy, and dynamically adjusts the fusion proportion of the visible light image and the infrared image so as to maximize the information quantity of the fusion image.
The invention has the advantages that the invention is not only suitable for image registration under different resolution ratios and varifocal conditions, but also obviously reduces the workload of registration calibration and improves the efficiency of image registration. By means of the fusion tracking technology, accuracy and robustness of target tracking are enhanced, requirements of unmanned aerial vehicle real-time image processing are met, and the method has wide application prospects and practical application values.
It should be appreciated that various forms of the flows shown above may be used to reorder, add, or delete steps. For example, the steps described in the present disclosure may be performed in parallel, sequentially, or in a different order, provided that the desired results of the technical solutions of the present disclosure are achieved, and are not limited herein.
The above embodiments do not limit the scope of the present invention. It will be apparent to those skilled in the art that various modifications, combinations, sub-combinations and alternatives are possible, depending on design requirements and other factors. Any modifications, equivalent substitutions and improvements made within the spirit and principles of the present invention should be included in the scope of the present invention.

Claims (9)

Translated fromChinese
1.一种结合地速补偿的目标稳定跟踪的方法,其特征在于,具体包括如下步骤:1. A method for stable target tracking combined with ground speed compensation, characterized in that it specifically comprises the following steps:S1.通过无人机载IMU测量无人机的飞行速度和飞行方向,确定地速;利用无人机机载光电转塔的测角系统获取无人机飞行路径上的角度参数;根据地速和角度参数,计算地速补偿数值;角度参数包括方位角α和俯仰角β;所述地速补偿数值的计算方法具体包括:S1. The flight speed and flight direction of the UAV are measured by the UAV-mounted IMU to determine the ground speed; the angle parameters on the flight path of the UAV are obtained by using the angle measurement system of the UAV-mounted optoelectronic turret; the ground speed compensation value is calculated according to the ground speed and angle parameters; the angle parameters include the azimuth angle α and the pitch angle β; the calculation method of the ground speed compensation value specifically includes:S101.由方位角α定义无人机的东向速度和北向速度;将无人机的东向速度和北向速度转换为相对于无人机飞行路径的速度分量S101. Define the eastward speed of the drone by the azimuth angle α and northbound speed ; Set the drone's eastward speed and northbound speed Converted to velocity component relative to the UAV flight path : ;式中,α表示无人机飞行路径上的方位角,表示东向速度,表示北向速度;In the formula, α represents the azimuth angle of the UAV flight path, represents the eastward speed, represents the north speed;S102.根据无人机飞行路径上的俯仰角β调整速度分量,得到调整后的水平速度分量,计算公式如下:S102. Adjust the velocity component according to the pitch angle β on the UAV flight path , and the adjusted horizontal velocity component is obtained. The calculation formula is as follows: ;式中,是根据无人机东向速度和北向速度、方位角α和俯仰角β计算得到的水平速度分量;表示速度分量;β表示无人机飞行路径上的俯仰角;In the formula, Based on the eastward speed of the drone and northbound speed , the horizontal velocity component calculated from the azimuth angle α and the pitch angle β; represents the velocity component; β represents the pitch angle on the flight path of the UAV;S103.计算地速补偿数值;计算公式如下:S103. Calculate the ground speed compensation value; the calculation formula is as follows: ;式中,表示地速补偿数值;是比例常数,用于调整补偿强度;表示水平速度分量;In the formula, Indicates the ground speed compensation value; is a proportionality constant used to adjust the compensation intensity; represents the horizontal velocity component;S2.将可见光相机与红外相机均安装在无人机机载光电转塔上,利用无人机机载光电转塔捕获可见光图像和红外图像,通过图像配准算法进行实时配准融合;S2. Install both the visible light camera and the infrared camera on the optoelectronic turret on the drone, use the optoelectronic turret on the drone to capture visible light images and infrared images, and perform real-time registration and fusion through image registration algorithm;S3.根据配准融合后的图像,提取局部对比度和信息熵,并通过融合跟踪方法动态调整可见光图像和红外图像的融合比例,生成融合跟踪图像;将融合跟踪图像输入相关跟踪算法,进行目标跟踪;根据地速补偿的参数进行目标搜索区域偏移,确保以最大概率搜索到目标;S3. Extract local contrast and information entropy based on the registered fused image, and dynamically adjust the fusion ratio of the visible light image and the infrared image through the fusion tracking method to generate a fused tracking image; input the fused tracking image into the relevant tracking algorithm to track the target; offset the target search area based on the parameters of the ground speed compensation to ensure that the target is searched with the maximum probability;S4.伺服系统将地速补偿参数输入速度闭环控制环路,并结合相关跟踪的脱靶量,对目标进行稳定跟踪;伺服系统含两级闭合控制环:速度环和跟踪环;速度环接收地速补偿数值C,形成含地速补偿值的速度环回路;S4. 
2. The method for stable target tracking combined with ground speed compensation according to claim 1, characterized in that step S2 comprises the following sub-steps:

S201. Ground calibration: on the ground, calibrate the visible Y-direction field of view of the visible light camera at each focal length and the infrared Y-direction field of view of the infrared camera at each focal length, record the relationship between field of view and focal length as a file, and store it on the camera control board.

S202. Coarse field-of-view matching: the camera control board controls the motion of the visible light camera or the infrared camera through a PID algorithm until the field of view of a single pixel of the visible light image equals the field of view of a single pixel of the infrared image.

S203. Image descriptor extraction and registration: extract heterogeneous image descriptors from the visible light image and the infrared image, match the descriptors, store the projection relationship between each pixel coordinate of the visible light image and the infrared image as matrix A, and call matrix A to complete the registration.

3. The method for stable target tracking combined with ground speed compensation according to claim 2, characterized in that in step S203 the heterogeneous image descriptors of the visible light image and the infrared image comprise a phase consistency descriptor and a gradient descriptor, and matching uses a weighted correlation distance, as follows:

S2031. Compute the phase consistency descriptor of the heterogeneous image, which measures the phase of specific frequency components in the image. For each pixel, the phase is computed as

$$\phi(x,y) = \frac{1}{M}\sum_{w=1}^{M}\phi_w(x,y)$$

where (x, y) are the pixel coordinates, w indexes the scales, $\phi_w(x,y)$ is the phase at scale w, and M is the number of scales.

S2032. Compute the gradient descriptor of the heterogeneous image. The gradients at scale m are

$$g_x^{m}(x,y)=\frac{\partial I(x,y)}{\partial x},\qquad g_y^{m}(x,y)=\frac{\partial I(x,y)}{\partial y}$$

where $g_x^{m}$ and $g_y^{m}$ are the x- and y-direction gradients at scale m and $I$ is the image brightness. The gradient descriptor of the heterogeneous image is then

$$\theta_m(x,y)=\arctan\frac{g_y^{m}(x,y)}{g_x^{m}(x,y)}$$

where $\theta_m(x,y)$ is the gradient direction, i.e. the direction of the brightness change of the image at point (x, y).

S2033. With the statistical scale set to m, the gradient histogram of the 10×10 pixels centred on pixel (x, y) is

$$H_m(x,y,b)=\sum_{(u,v)\in\Omega_{10\times10}(x,y)}\mathbb{1}\left[\theta_m(u,v)\in b\right]$$

where b indexes the orientation bins and $\Omega_{10\times10}(x,y)$ is the 10×10 neighbourhood of (x, y).

S2034. Taking the image target size as the reference, take M/2 scales upward and M/2 scales downward, M scales in total; the phase consistency descriptor, the gradient descriptor and the gradient histogram at each scale together form the descriptor vector over the M scales.

S2035. Match with the weighted correlation distance:

$$WCD=\sum_{m=1}^{M}w_m\,\rho\!\left(D_m^{ccd},D_m^{ir}\right),\qquad w_m=\frac{\sigma_m^{2}}{\sum_{k=1}^{M}\sigma_k^{2}}$$

where WCD is the weighted correlation distance, i.e. the weighted sum over scales 1 to M of the correlation between the visible light and infrared descriptor vectors; m is the descriptor scale and M the number of scales; $D_m^{ccd}$ and $D_m^{ir}$ are the multi-scale descriptor vectors at the m-th scale of the visible light image and the infrared image; $\rho(\cdot,\cdot)$ is the correlation between two descriptor vectors; $w_m$ is the weight of the m-th scale; and $\sigma_m^{2}$ is the variance of the descriptor vectors $D_m^{ccd}$ and $D_m^{ir}$ at the m-th scale.

When the WCD value is greater than 0.5, $D_m^{ccd}$ and $D_m^{ir}$ are considered successfully matched, and the projection relationship between each pixel coordinate of the visible light image and the infrared image is stored as matrix A.
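Below is a sketch of the weighted correlation distance of S2035 under stated assumptions: each scale's descriptors (phase consistency, gradient direction, gradient histogram) are flattened into equal-length 1-D vectors, the per-scale weight is taken from the variance of the descriptor vectors, and Pearson correlation stands in for the correlation ρ.

```python
import numpy as np

def weighted_correlation_distance(desc_ccd, desc_ir):
    """Weighted correlation distance over M scales (claim 3, S2035).

    desc_ccd, desc_ir: lists of M equal-length 1-D descriptor vectors
    (phase consistency, gradient direction and gradient histogram
    values concatenated per scale).
    """
    M = len(desc_ccd)
    # Per-scale weights from the variance of the descriptor vectors.
    variances = np.array([np.var(np.concatenate([desc_ccd[m], desc_ir[m]]))
                          for m in range(M)])
    weights = variances / (variances.sum() + 1e-12)
    wcd = 0.0
    for m in range(M):
        # Pearson correlation between the two descriptor vectors at scale m.
        rho = np.corrcoef(desc_ccd[m], desc_ir[m])[0, 1]
        wcd += weights[m] * rho
    return wcd

def is_match(desc_ccd, desc_ir, threshold=0.5):
    """A descriptor pair is accepted when WCD exceeds the 0.5 threshold."""
    return weighted_correlation_distance(desc_ccd, desc_ir) > threshold
```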
4. The method for stable target tracking combined with ground speed compensation according to claim 2, characterized in that in step S202 the field of view of a single pixel of the visible light image is adjusted to equal the field of view of a single pixel of the infrared image as follows:

When the UAV flies a mission with the visible lens as the main lens and the infrared image fused in for tracking, execute the following steps: with the visible lens as the main lens, only the visible lens responds when the ground station sends wide-field or narrow-field commands; after ground station control ends, the visible lens is stationary and the visible light focal length value gCCD is read out; from gCCD, the visible pixel size and the infrared pixel size, the infrared focal length giving the same single-pixel field of view is calculated, and the camera control board drives the infrared camera back and forth through the PID algorithm until the visible light focal length is 1/6 of the infrared focal length.

When the UAV flies a mission with the infrared lens as the main lens and the visible image fused in for tracking, execute the following steps: with the infrared lens as the main lens, only the infrared lens responds when the ground station sends wide-field or narrow-field commands; after ground station control ends, the infrared lens is stationary and the infrared focal length value gIR is read out; from gIR, the visible pixel size and the infrared pixel size, the visible light focal length giving the same single-pixel field of view is calculated, and the camera control board drives the visible light camera back and forth through the PID algorithm until the visible light focal length is 1/6 of the infrared focal length.
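The single-pixel field-of-view match of claim 4 reduces to a fixed focal-length ratio set by the two pixel pitches, since equal per-pixel IFOV requires pix_ccd / f_ccd = pix_ir / f_ir. A sketch follows; the pixel sizes are assumptions chosen so that the ratio comes out to the 1/6 stated in the claim.

```python
def target_ir_focal_length(f_ccd_mm, pix_ccd_um=2.5, pix_ir_um=15.0):
    """Infrared focal length giving the same single-pixel field of view
    as the visible camera at focal length f_ccd_mm (claim 4, S202).

    Equal per-pixel IFOV requires pix_ccd / f_ccd = pix_ir / f_ir,
    hence f_ir = f_ccd * (pix_ir / pix_ccd).  The pixel sizes here are
    assumptions; with 2.5 um visible and 15 um infrared pixels the
    visible focal length is 1/6 of the infrared one, as in the claim.
    """
    return f_ccd_mm * (pix_ir_um / pix_ccd_um)

# Example: a 40 mm visible focal length calls for a 240 mm infrared
# focal length, which the control board approaches via its PID loop.
assert target_ir_focal_length(40.0) == 240.0
```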
5. The method for stable target tracking combined with ground speed compensation according to claim 1, characterized in that the fusion tracking method in step S3 comprises the following sub-steps:

S301. Compute the local contrast of the visible light image and of the infrared image; normalize the two local contrasts to obtain the local-contrast weights of the visible light image and the infrared image.

S302. Compute the information entropy of the visible light image and of the infrared image; normalize the two entropies to obtain the information-entropy weights of the visible light image and the infrared image.

S303. Convert the RGB file of the visible light image to YUV, superimpose the Y-channel luminance $Y_{ccd}$ of the visible light image on the Y-channel luminance $Y_{ir}$ of the infrared image according to matrix A, and fuse pixel by pixel to generate the fused tracking image $Y_{fused}$:

$$Y_{fused}=\frac{w^{c}_{ccd}+w^{e}_{ccd}}{2}\,Y_{ccd}+\frac{w^{c}_{ir}+w^{e}_{ir}}{2}\,Y_{ir}$$

where $Y_{fused}$ is the fused tracking image, $Y_{ccd}$ the Y-channel luminance of the visible light image and $Y_{ir}$ the Y-channel luminance of the infrared image; $w^{c}_{ccd}$ is the visible light image weight based on local contrast and $w^{c}_{ir}$ the infrared image weight based on local contrast; $w^{e}_{ccd}$ is the visible light image weight based on information entropy and $w^{e}_{ir}$ the infrared image weight based on information entropy.
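A sketch of the Y-channel fusion of S303, using OpenCV for the RGB-to-YUV conversion. Averaging the contrast weight and the entropy weight per image is the combination assumed in the reconstructed formula above; the function and argument names are illustrative.

```python
import cv2
import numpy as np

def fuse_tracking_image(bgr_ccd, y_ir, w_c_ccd, w_c_ir, w_e_ccd, w_e_ir):
    """Pixel-wise Y-channel fusion (claim 5, S303).

    bgr_ccd: registered visible image (BGR, uint8); y_ir: registered
    infrared luminance of the same size.  The weights come from
    S301/S302; averaging the contrast weight and the entropy weight
    per image follows the reconstructed formula above.
    """
    yuv = cv2.cvtColor(bgr_ccd, cv2.COLOR_BGR2YUV)
    y_ccd = yuv[:, :, 0].astype(np.float32)
    w_ccd = 0.5 * (w_c_ccd + w_e_ccd)
    w_ir = 0.5 * (w_c_ir + w_e_ir)
    y_fused = w_ccd * y_ccd + w_ir * y_ir.astype(np.float32)
    return np.clip(y_fused, 0, 255).astype(np.uint8)
```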
6. The method for stable target tracking combined with ground speed compensation according to claim 5, characterized in that step S301 comprises the following sub-steps:

S3011. Determine a local region centred on each feature, apply a mean filter to the local region to obtain the local average brightness, compute the standard deviation of the pixel values in the local region about the local average brightness, and take the maximum standard deviation as the local contrast. The local standard deviations of the visible light image and the infrared image are

$$\sigma_{ccd}=\sqrt{\frac{1}{|\Omega_f|}\sum_{(x,y)\in\Omega_f}\left(I_{ccd}(x,y)-\bar{I}_{\Omega_f}\right)^2},\qquad \sigma_{ir}=\sqrt{\frac{1}{|\Omega_f|}\sum_{(x,y)\in\Omega_f}\left(I_{ir}(x,y)-\bar{I}_{\Omega_f}\right)^2}$$

where $I_{ccd}(x,y)$ is the pixel value of the visible image at coordinates (x, y), $I_{ir}(x,y)$ the pixel value of the infrared image at (x, y), $\bar{I}_{\Omega_f}$ the average pixel value of the local region $\Omega_f$ in the corresponding image, and $|\Omega_f|$ the total number of pixels of $\Omega_f$. The local contrasts are then

$$\mathrm{CCDLocalContrast}=\max_f\,\sigma_{ccd},\qquad \mathrm{IRLocalContrast}=\max_f\,\sigma_{ir}$$

S3012. Normalize the local contrasts so that CCDLocalContrast and IRLocalContrast sum to 1, giving the local-contrast weights of the visible light image and the infrared image:

$$w^{c}_{ccd}=\frac{\mathrm{CCDLocalContrast}}{\mathrm{CCDLocalContrast}+\mathrm{IRLocalContrast}},\qquad w^{c}_{ir}=\frac{\mathrm{IRLocalContrast}}{\mathrm{CCDLocalContrast}+\mathrm{IRLocalContrast}}$$

Step S302 comprises the following sub-steps:

S3021. Use histograms to count how often the grey value i occurs in the visible light image or the infrared image. For the grey value $I_{ccd}(x,y)$ or $I_{ir}(x,y)$ of every pixel of the grey image:

$$h_{ccd}(i)=\sum_{x,y}\mathbb{1}\left[I_{ccd}(x,y)=i\right],\qquad h_{ir}(i)=\sum_{x,y}\mathbb{1}\left[I_{ir}(x,y)=i\right]$$

where $h_{ccd}$ and $h_{ir}$ are the visible and infrared histograms, i is the grey value, and $\mathbb{1}[\cdot]$ is the indicator function, equal to 1 when its argument holds and 0 otherwise; $I_{ccd}(x,y)$ and $I_{ir}(x,y)$ are the grey values of the pixel at coordinates (x, y) in the visible image and in the infrared image.

S3022. Normalize the histograms into probability distributions by dividing the frequency of each grey value by the total number of pixels of the image:

$$p_{ccd}(i)=\frac{h_{ccd}(i)}{N_{ccd}},\qquad p_{ir}(i)=\frac{h_{ir}(i)}{N_{ir}}$$

where $p_{ccd}(i)$ and $p_{ir}(i)$ are the probabilities of grey value i in the visible light image and in the infrared image, and $N_{ccd}$ and $N_{ir}$ are the total pixel counts (width times height) of the visible light image and of the infrared image.

S3023. Using the Shannon information entropy formula, multiply the probability of each grey value by its base-2 logarithm and accumulate over all 256 possible grey values to obtain the visible light image entropy $H_{ccd}$ and the infrared image entropy $H_{ir}$:

$$H_{ccd}=-\sum_{i=0}^{255}p_{ccd}(i)\log_2 p_{ccd}(i),\qquad H_{ir}=-\sum_{i=0}^{255}p_{ir}(i)\log_2 p_{ir}(i)$$

S3024. Normalize the entropies so that $H_{ccd}$ and $H_{ir}$ sum to 1, giving the information-entropy weights of the visible light image and the infrared image:

$$w^{e}_{ccd}=\frac{H_{ccd}}{H_{ccd}+H_{ir}},\qquad w^{e}_{ir}=\frac{H_{ir}}{H_{ccd}+H_{ir}}$$
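A sketch of the weight computation behind claim 6: maximum local standard deviation for local contrast (S3011) and 256-bin Shannon entropy (S3021–S3023), each pair normalized to sum to 1 (S3012/S3024). The 11-pixel window size is an assumption; the claim only specifies a local region centred on the feature.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def local_contrast(img, win=11):
    """Maximum local standard deviation (claim 6, S3011)."""
    img = img.astype(np.float64)
    mean = uniform_filter(img, win)                 # local average brightness
    var = uniform_filter(img ** 2, win) - mean ** 2
    return float(np.sqrt(np.clip(var, 0.0, None)).max())

def image_entropy(img):
    """256-bin Shannon entropy (claim 6, S3021-S3023)."""
    hist, _ = np.histogram(img, bins=256, range=(0, 256))
    p = hist / hist.sum()
    p = p[p > 0]                                    # empty bins contribute 0
    return float(-(p * np.log2(p)).sum())

def normalized_pair(score_ccd, score_ir):
    """Normalize two scores so they sum to 1 (S3012 / S3024)."""
    total = score_ccd + score_ir
    return score_ccd / total, score_ir / total
```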
7. The method for stable target tracking combined with ground speed compensation according to claim 6, characterized in that the correlation tracking algorithm in step S3 comprises the following steps:

S304. Target region initialization: in the fused tracking image, determine the initial region of the target with a target detection algorithm; this initial region serves as the starting point of the tracking algorithm.

S305. Template matching and correlation computation: using the initial target region as the template, search for the target by computing the correlation between the template and every candidate position in the image; the correlation is

$$R(x,y)=\frac{\sum_{i,j}\left(T(i,j)-\mu_T\right)\left(I(x+i,y+j)-\mu_I\right)}{\sqrt{\sum_{i,j}\left(T(i,j)-\mu_T\right)^2\sum_{i,j}\left(I(x+i,y+j)-\mu_I\right)^2}}$$

where $R(x,y)$ is the correlation score at position (x, y) of the fused image, $T(i,j)$ is a pixel value of the fused template image, $I(x+i,y+j)$ is a pixel value of the target fused image, and $\mu_T$ and $\mu_I$ are the means of the template and of the target region.

S306. Peak detection and target localization: determine the new target position by finding the local maximum in the correlation map; if the peak exceeds the preset threshold, the target is considered successfully tracked at that position and the target model is updated from the tracking result.

8. The method for stable target tracking combined with ground speed compensation according to claim 7, characterized in that the method in step S3 for offsetting the target search area according to the ground speed compensation parameters comprises the following steps:

S307. From the ground speed compensation value $C$, compute the distance $\Delta P_{offset}$ the UAV moves at ground speed $C$ within time $\Delta t$:

$$\Delta P_{offset}=C\,\Delta t$$

S308. Take the target position $P_{current}$ currently observed by the UAV as the centre of the initial search area; shift the search area by $\Delta P_{offset}$, so the new search area centre is

$$P_{new}=P_{current}+\Delta P_{offset}$$

S309. Run the correlation tracking algorithm in the new search area to find the target with maximum probability, and compute the target miss distance $\Delta P_{error}$.
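A sketch of claims 7 and 8 together: normalized cross-correlation template matching over a search window whose centre has been shifted by the ground-speed prediction ΔP_offset = C·Δt. The search radius and the plain nested-loop NCC (chosen for clarity over speed) are assumptions; the returned peak position is the top-left corner of the best-matching window.

```python
import numpy as np

def ncc_track(frame, template, center, delta_p_offset, search_half=48):
    """NCC template matching in a ground-speed-shifted search area
    (claims 7 and 8).

    frame, template: single-channel float arrays (fused Y images);
    center: (x, y) last known target position; delta_p_offset: (dx, dy)
    pixel shift predicted from C * dt; search_half: assumed radius.
    Returns the top-left corner of the best match and its NCC score.
    """
    cx = int(center[0] + delta_p_offset[0])          # S308: shifted centre
    cy = int(center[1] + delta_p_offset[1])
    th, tw = template.shape
    x0, y0 = max(cx - search_half, 0), max(cy - search_half, 0)
    x1 = min(cx + search_half + tw, frame.shape[1])
    y1 = min(cy + search_half + th, frame.shape[0])
    search = frame[y0:y1, x0:x1]

    t = template - template.mean()
    best_score, best_xy = -1.0, (cx, cy)
    for y in range(search.shape[0] - th + 1):
        for x in range(search.shape[1] - tw + 1):
            win = search[y:y + th, x:x + tw]
            w = win - win.mean()
            denom = np.sqrt((t * t).sum() * (w * w).sum())
            if denom > 0:
                score = (t * w).sum() / denom        # S305: NCC score R(x, y)
                if score > best_score:
                    best_score, best_xy = score, (x0 + x, y0 + y)
    return best_xy, best_score
```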
9. The method for stable target tracking combined with ground speed compensation according to claim 1, characterized in that step S4 further comprises the following sub-step:

The error input of the tracking loop of the servo system is the miss distance $\Delta P_{error}$; the pointing of the optoelectronic turret is adjusted according to $\Delta P_{error}$ to reduce the deviation and achieve stable tracking:

$$\Delta P_{correction}=K_p\,\Delta P_{error}+K_i\!\int\Delta P_{error}\,dt+K_d\,\frac{d\,\Delta P_{error}}{dt}$$

where $\Delta P_{correction}$ is the tracking-loop control output that closes the tracking loop for precise target tracking, $\Delta P_{error}$ is the miss distance, and $K_p$, $K_i$ and $K_d$ are the proportional, integral and derivative gains of the tracking loop of the servo control system.
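A sketch of the discrete PID tracking loop of claim 9; the gain values are placeholders, not values from the patent.

```python
class TrackingLoopPID:
    """Discrete PID tracking loop of claim 9; the gains are
    illustrative placeholders, not values from the patent."""

    def __init__(self, kp=0.8, ki=0.1, kd=0.05):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = None

    def update(self, miss_distance, dt):
        """miss_distance: ΔP_error from S309; returns ΔP_correction,
        the turret pointing correction that closes the tracking loop."""
        self.integral += miss_distance * dt
        if self.prev_error is None:
            derivative = 0.0
        else:
            derivative = (miss_distance - self.prev_error) / dt
        self.prev_error = miss_distance
        return (self.kp * miss_distance
                + self.ki * self.integral
                + self.kd * derivative)
```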
CN202510256462.8A | 2025-03-05 | 2025-03-05 | A method for stable target tracking combined with ground speed compensation | Active | CN119758708B (en)

Priority Applications (1)

Application Number | Priority Date | Filing Date | Title
CN202510256462.8A (CN119758708B, en) | 2025-03-05 | 2025-03-05 | A method for stable target tracking combined with ground speed compensation

Applications Claiming Priority (1)

Application Number | Priority Date | Filing Date | Title
CN202510256462.8A (CN119758708B, en) | 2025-03-05 | 2025-03-05 | A method for stable target tracking combined with ground speed compensation

Publications (2)

Publication Number | Publication Date
CN119758708A | 2025-04-04
CN119758708B | 2025-06-24

Family

ID=95186085

Family Applications (1)

Application Number | Title | Priority Date | Filing Date | Status
CN202510256462.8A (CN119758708B, en) | A method for stable target tracking combined with ground speed compensation | 2025-03-05 | 2025-03-05 | Active

Country Status (1)

Country | Link
CN (1) | CN119758708B (en)

Citations (2)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
CN111223133A (en) * | 2020-01-07 | 2020-06-02 | 上海交通大学 | Registration method of heterogeneous images
CN114241009A (en) * | 2021-12-24 | 2022-03-25 | 普宙科技(深圳)有限公司 | Target tracking method, system, storage medium and electronic equipment

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
CN106525238B (en) * | 2016-10-27 | 2018-08-03 | 中国科学院光电研究院 | A design method for a satellite-borne multispectral imaging system based on super-resolution reconstruction
CN107240096A (en) * | 2017-06-01 | 2017-10-10 | 陕西学前师范学院 | A quality evaluation method for infrared and visible image fusion
CN109584193A (en) * | 2018-10-24 | 2019-04-05 | 航天时代飞鸿技术有限公司 | An infrared and visible light image fusion method for UAVs based on target pre-extraction
CN109947123B (en) * | 2019-02-27 | 2021-06-22 | 南京航空航天大学 | A UAV path tracking and autonomous obstacle avoidance method based on line-of-sight guidance law
CN115167510A (en) * | 2022-07-22 | 2022-10-11 | 中国人民解放军军事科学院国防科技创新研究院 | A method for estimating the remaining average velocity of a variable-speed missile and controlling the flight time
CN116433733B (en) * | 2023-01-18 | 2025-09-19 | 合肥工业大学 | A registration method and device for visible light and infrared images of circuit boards

Also Published As

Publication number | Publication date
CN119758708A | 2025-04-04

Similar Documents

Publication | Title
CN109708649B (en) | A method and system for determining the attitude of a remote sensing satellite
CN116679314A (en) | Three-dimensional laser radar synchronous mapping and positioning method and system for fusion point cloud intensity
CN109900274B (en) | An image matching method and system
CN119594989B (en) | An adaptive navigation system for unmanned aerial vehicles
CN119762557B (en) | A method for registration and fusion of visible light and SAR images in dynamic flight of UAV
CN114689030A (en) | Unmanned aerial vehicle auxiliary positioning method and system based on airborne vision
CN115618749A (en) | Error compensation method for real-time positioning of large unmanned aerial vehicle
CN115950435A (en) | Real-time positioning method for unmanned aerial vehicle inspection image
Qiu et al. | High-precision visual geo-localization of UAV based on hierarchical localization
CN115471555A (en) | A Pose Determination Method for UAV Infrared Inspection Based on Image Feature Point Matching
CN112346485B (en) | Photoelectric tracking control method, system, electronic equipment and storage medium
CN118660223A (en) | A pod drone video anti-shake method and system
CN119758708B (en) | A method for stable target tracking combined with ground speed compensation
CN119620767A (en) | A method and system for automatically adjusting laser direction based on unmanned aerial vehicle
CN110738706B (en) | Rapid robot visual positioning method based on track conjecture
Zhang et al. | An UAV navigation aided with computer vision
CN118687561A (en) | A UAV scene matching positioning method and system based on weak light image enhancement
CN112198884A (en) | Unmanned aerial vehicle mobile platform landing method based on visual guidance
CN114842224B (en) | An absolute visual matching positioning method for monocular UAV based on geographic base map
CN119762541B (en) | Fusion tracking method of visible light and infrared images in UAV dynamic flight
CN119762558B (en) | A method for registering and fusing infrared images and SAR images in dynamic flight of unmanned aerial vehicles
Wei et al. | Synthetic velocity measurement algorithm of monocular vision based on square-root cubature Kalman filter
CN119762556B (en) | Visible light and infrared image registration and fusion method for UAV optoelectronic turret
CN119762364B (en) | Visible light image enhancement method combining infrared characteristics
CN120252746B (en) | Remote long-endurance integrated navigation method and system combining visual inertia joint optimization and image matching

Legal Events

Code | Title
PB01 | Publication
SE01 | Entry into force of request for substantive examination
GR01 | Patent grant
