Disclosure of Invention
The invention provides a color space construction method with illumination robustness for visual tracking. The method can be applied directly to conventional visual tracking methods based on feature distribution, and it alleviates the tracking drift that such methods tend to exhibit when the illumination conditions change markedly.
The technical solution for achieving the purpose of the invention is as follows: a color space construction method with illumination robustness for visual tracking, comprising the following steps:
firstly, based on the quantitative conversion formulas between the RGB and HSI spaces, a method for maintaining the H component between frames is provided, namely hue maintenance;
secondly, the image pixel points are divided into two categories according to whether they carry color information, namely color pixel points and gray pixel points; the illumination-sensitive components of each category are corrected and constrained in the HSI space, and a concrete operation implemented in the RGB space is given, thereby constructing a new color space that is robust to illumination;
and finally, the newly constructed color space is applied to classic visual tracking algorithms, demonstrating that the new color space improves the resistance of traditional tracking algorithms to illumination change.
Compared with the prior art, the invention has the following remarkable advantages: (1) under changing light-source intensity, traditional tracking algorithms retain high stability and precision; since only a linear transformation of the color space is performed and no modification of the tracking algorithm is involved, the overall computational complexity is low and real-time performance is good. (2) On the one hand, the influence of illumination change on the features of a target image is explained from its optical essence; on the other hand, the illumination-sensitive components of the color space are corrected at the source, greatly improving the stability and precision of existing tracking algorithms in illumination-changing environments, so the illumination robustness is very high. (3) The method maintains between frames the color component that reflects the true color characteristics of a target and is insensitive to illumination change (namely the hue component), and deliberately controls and corrects the sensitive components, thereby eliminating the influence of illumination change on the color space and providing a new color space with illumination robustness.
The invention is further described below with reference to the accompanying drawings:
Detailed Description
With reference to fig. 1, the method for constructing a color space with illumination robustness for visual tracking according to the present invention includes the following steps:
firstly, on the basis of keeping the inter-frame hue information of each pixel point unchanged, a method for maintaining the H component between frames, i.e., hue maintenance, is provided according to the quantitative conversion formulas between the RGB and HSI (hue-saturation-intensity) spaces: translation and scaling are used to transform the RGB color components linearly so that the hue component H in the corresponding HSI space remains unchanged, and the constraint conditions that must be obeyed when correcting the saturation (S) and intensity (I) components are given at the same time.
The conventional RGB color space is converted to the HSI color space as follows. For any color digital image based on the 24-bit RGB color space, the three components R, G, B must first be normalized to the interval [0, 1], i.e., each of the three components is divided by 255; the H, S, I components in the corresponding HSI color space can then be calculated according to the following formula:

H = arctan( √3 (G − B) / (2R − G − B) )
S = 1 − 3·min(R, G, B) / (R + G + B)
I = (R + G + B) / 3 (1)

in the formula, R, G, B represent the red, green, and blue components of a pixel; H, S, I represent the hue, saturation, and intensity components of the pixel; arctan(·) represents the arctangent operation; and min(·) represents the minimum operation.
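As a concrete illustration, the following is a minimal Python sketch of formula (1) for a single normalized pixel; the use of atan2 for the arctangent and the handling of gray pixels (R = G = B, where the hue is undefined) are implementation choices rather than details fixed by the method:

```python
import math

def rgb_to_hsi(r, g, b):
    """Convert normalized RGB components (each in [0, 1]) to HSI per formula (1)."""
    i = (r + g + b) / 3.0                        # intensity: mean of the components
    if r == g == b:                              # gray pixel: hue undefined, S = 0
        return None, 0.0, i
    h = math.atan2(math.sqrt(3.0) * (g - b), 2.0 * r - g - b)  # hue, in radians
    s = 1.0 - 3.0 * min(r, g, b) / (r + g + b)   # saturation
    return h, s, i

# Example: a 24-bit pixel (200, 80, 40), normalized by 255.
print(rgb_to_hsi(200 / 255, 80 / 255, 40 / 255))
```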
The hue maintenance method is as follows: first, from the mathematical relationship between H, S, I and R, G, B in formula (1), if the three components R, G, B on the right-hand side of the three equations are translated or scaled simultaneously, i.e., the same constant is added to all three components or all three are multiplied by the same non-zero constant, the value of H does not change; second, for any pixel that possesses an H component, the values of the S and I components can therefore be changed by translation and scaling alone while the value of the H component is guaranteed to remain unchanged; finally, using the same translation factor or the same scale factor, the R, G, B components of a pixel point are translated or scaled simultaneously, changing the corresponding S and I component values while keeping the H component value constant, as illustrated below.
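A short numerical check of this invariance, assuming the arctangent hue of formula (1) (atan2 is used here, so the scale factor is taken positive):

```python
import math

def hue(r, g, b):
    # Arctangent hue from formula (1).
    return math.atan2(math.sqrt(3.0) * (g - b), 2.0 * r - g - b)

r, g, b = 0.8, 0.3, 0.2
delta, beta = -0.15, 1.5   # an arbitrary translation and a positive scale

# Translation: (g+d)-(b+d) = g-b and 2(r+d)-(g+d)-(b+d) = 2r-g-b, so H is unchanged.
print(hue(r, g, b), hue(r + delta, g + delta, b + delta))
# Scaling: both atan2 arguments are multiplied by beta > 0, so H is unchanged.
print(hue(beta * r, beta * g, beta * b))
```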
Secondly, the image pixel points are divided into two categories according to whether they carry color information, namely color pixel points and gray pixel points, and their illumination-sensitive components are corrected and constrained separately in the HSI space (apart from the H component, the components S and I, which change with illumination, are corrected and constrained so that the H, S, I components of every pixel keep illumination-robust characteristics across consecutive frames); a concrete operation implemented in the RGB space is provided, thereby constructing a new color space robust to illumination. Since the hue H is robust to illumination, when the red (R), green (G), and blue (B) components are corrected it must be ensured that the corresponding H component remains unchanged before and after the conversion. That is, the R, G, B components of every pixel point of the whole image are only translated and scaled, which forces the corresponding S and I values to change while the H value remains unchanged.
The S component of the color pixel points is corrected as follows: first, based on the fact that illumination change alters the saturation value of a pixel, the S component of the color pixel points is uniformly corrected to 1, i.e., the pixel saturation is constrained so as to eliminate the saturation interference; then the aforementioned translation operation is performed: a translation factor δ is set and the S component is corrected using the following formula:

S = 1 − 3·min(R + δ, G + δ, B + δ) / [(R + δ) + (G + δ) + (B + δ)] = 1 (2)

wherein R, G, B represent the red, green, and blue components of a pixel, H, S, I represent the hue, saturation, and intensity components of the pixel, and min(·) represents the minimum operation; finally, by inspecting formula (2) and using the constraint that the three components R, G, B of a color pixel cannot all be equal, the value of δ is obtained:
δ = -min(R, G, B) (3)
in the formula, min(·) represents the minimum value operation; that is, the minimum of the three original components is subtracted from each of R, G, B, which completes the saturation correction.
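A minimal sketch of this saturation correction for a single color pixel (normalized components assumed):

```python
def correct_saturation(r, g, b):
    """Translate R, G, B by delta = -min(R, G, B), per formula (3).

    After the shift the minimum component is 0, so by formula (2)
    S = 1 - 3*0/(R'+G'+B') = 1. Requires a color pixel (R, G, B not all equal).
    """
    delta = -min(r, g, b)
    return r + delta, g + delta, b + delta

print(correct_saturation(0.8, 0.3, 0.2))  # ~(0.6, 0.1, 0.0)
```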
The I component of the color pixel points is corrected as follows: first, based on the fact that illumination change alters the brightness value of a pixel, the I component of the color pixel points is uniformly corrected to a constant α, i.e., the pixel brightness is constrained so as to eliminate the brightness interference; then the aforementioned scaling operation is performed: a scale factor β is set and the I component is corrected using the following formula:

I = β·[(R + δ) + (G + δ) + (B + δ)] / 3 = α (4)

wherein R, G, B represent the red, green, and blue components of a pixel, H, S, I represent the hue, saturation, and intensity components of the pixel, δ is the translation factor, α is a constant, and β is the scale factor; finally, the value of the scale factor is solved from formula (4):

β = 3α / (R + G + B − 3·min(R, G, B))

in the formula, min(·) represents the minimum value operation; that is, the translated R, G, B components are each multiplied by β, which completes the brightness correction.
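A matching sketch of the brightness correction; the value α = 0.5 is illustrative only, since the source leaves the constant unspecified:

```python
def correct_luminance(r, g, b, alpha=0.5):
    """Scale the translated components by beta so that I = alpha, per formula (4).

    beta = 3*alpha / (R + G + B - 3*min(R, G, B)); alpha is a design
    constant (0.5 here for illustration). Requires a color pixel.
    """
    mn = min(r, g, b)
    shifted = (r - mn, g - mn, b - mn)      # saturation correction first
    beta = 3.0 * alpha / sum(shifted)       # sum > 0 for a color pixel
    return tuple(beta * c for c in shifted)

r2, g2, b2 = correct_luminance(0.8, 0.3, 0.2)
print((r2 + g2 + b2) / 3.0)                 # intensity equals alpha = 0.5
```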
The gray pixel points are mapped uniformly: first, based on the fact that a gray pixel point (R = G = B) carries no H information and its S is 0, it is concluded that only the I component of a gray pixel point changes with illumination, so the influence of illumination can be eliminated simply by correcting its I component; then, consistently with the I-component correction of the color pixel points, the I component is artificially set to the same constant α.
That is, all the gray pixel points, which lie on the brightness axis of the RGB space, are mapped to one and the same point, and the construction of the new color space is thereby completed, as sketched below.
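Putting the three corrections together, the following sketch maps a whole float RGB frame into the new space; the constant α = 0.5 and the tolerance eps for detecting gray pixels are illustrative choices, not values fixed by the method:

```python
import numpy as np

def illumination_robust_space(img, alpha=0.5, eps=1e-6):
    """Map an RGB image (floats in [0, 1], shape H x W x 3) into the new space.

    Color pixels: translate by -min, then scale by beta so that S = 1 and
    I = alpha while H is unchanged. Gray pixels (R = G = B): map to the
    single point (alpha, alpha, alpha)."""
    mn = img.min(axis=2, keepdims=True)
    shifted = img - mn                            # translation: S -> 1
    ssum = shifted.sum(axis=2, keepdims=True)
    beta = 3.0 * alpha / np.maximum(ssum, eps)    # scaling: I -> alpha
    out = beta * shifted
    out[ssum[..., 0] < eps] = alpha               # gray pixels collapse to one point
    return out

frame = np.random.rand(4, 4, 3)                   # stand-in for a video frame
print(illumination_robust_space(frame).mean(axis=2))  # every entry ~ alpha
```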
Finally, the newly constructed color space replaces the RGB color space and is applied to classic feature-distribution-based visual tracking algorithms, demonstrating that, without any correction or adjustment of the algorithms themselves, the new color space markedly improves their resistance to illumination change. The new color space is applied to the visual tracking algorithms directly: first, four typical classic visual tracking algorithms based on feature distribution, namely the Mean Shift (MS) algorithm, the Particle Filter (PF) algorithm, the continuously adaptive mean shift (CS) algorithm, and the hybrid mean shift-particle filter (MSPF) algorithm, are selected as test tools; then, in a scene with continuously changing illumination, a color CCD camera is used to shoot a total of 200 frames of a moving red model trolley as the test video; next, the RGB color space is used as the feature space of the four tracking algorithms to track the red trolley; then, the RGB space is replaced by the color space of the invention and the trolley is tracked with the same four algorithms, and the tracking results of each algorithm in the RGB color space and in the color space of the invention are displayed on the same graph, yielding the 4 tracking result comparison graphs shown in figures 2-1 to 2-4; finally, the accuracy of each tracking result is measured with the quantitative index CR, wherein CR is defined as:
in the formula, A_C represents the standard tracking result of the original image, marked in advance by an operator, A_R represents the actual tracking result obtained by the tracking algorithm, and ∩ denotes the overlapping area of the two regions; the higher the CR, the closer the tracking result is to the standard result. At the same time, the CR curves of all the methods over the 200 frames of video images are plotted, yielding fig. 3.
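The CR formula itself did not survive extraction; one common choice consistent with the description is the overlap ratio CR = area(A_C ∩ A_R) / area(A_C ∪ A_R), and the sketch below assumes that definition together with axis-aligned rectangular tracking windows, both of which are assumptions rather than details confirmed by the source:

```python
def coincidence_rate(box_c, box_r):
    """Overlap ratio of two axis-aligned boxes given as (x1, y1, x2, y2).

    ASSUMPTION: CR = area(Ac ∩ Ar) / area(Ac ∪ Ar); the source only states
    that ∩ is the overlapping area of the two regions."""
    ix1, iy1 = max(box_c[0], box_r[0]), max(box_c[1], box_r[1])
    ix2, iy2 = min(box_c[2], box_r[2]), min(box_c[3], box_r[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area = lambda b: (b[2] - b[0]) * (b[3] - b[1])
    union = area(box_c) + area(box_r) - inter
    return inter / union if union > 0 else 0.0

print(coincidence_rate((10, 10, 60, 60), (20, 20, 70, 70)))  # ~0.47
```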