Disclosure of Invention
In view of the above, the present invention provides a method for detecting a moving object of an intelligent pipe gallery based on an optical flow algorithm, including the following steps:
S1, acquiring the K-th frame image and the (K-i)-th frame image, and calculating an optical flow image of the K-th frame image from the acquired images; the value of i ranges from 1 to 4, and the optical flow image comprises the optical flow magnitude and the optical flow direction;
S2, obtaining an optical flow value mask image from the optical flow image, and removing optical flow scatter points and connected domains with an area smaller than a preset area threshold from the optical flow value mask image to obtain a motion area mask image;
S3, updating the tracking point information of the tracked targets in the motion area mask image according to the optical flow information in the optical flow image;
S4, calculating a new connected domain of the tracked target according to the updated tracking point information of the tracked target to obtain a new target mask image area;
S5, adding and deleting tracking point information in the new target mask image area;
S6, updating the tracking information of all tracked targets, merging the mask images of all targets to obtain a tracked area mask image, inverting the tracked area mask image, and intersecting the result with the moving target area mask image to obtain an untracked area mask image;
S7, adding new targets to be tracked in the untracked area mask image and tracking them;
S8, outputting the tracked target information and the new target information.
Preferably, in step S2, the specific process of obtaining the optical flow value mask image from the optical flow image and removing the optical flow scatter points and the connected domains with an area smaller than the preset area threshold to obtain the motion area mask image is as follows:
S21, performing binarization processing on the optical flow image to obtain a mask image of the moving target;
S22, removing optical flow scatter points in the mask image of the moving target based on the optical flow direction histogram;
S23, computing the connected domains of pixels with value 255, counting the area of each connected domain, and deleting the connected domains with an area smaller than the preset area threshold to obtain the motion area mask image.
Preferably, in step S21, the specific process of performing binarization processing on the optical flow image to obtain the mask image of the moving target includes:
S211, presetting an optical flow value threshold T_v;
S212, judging whether the optical flow value V(x, y) of the optical flow image at coordinate (x, y) is less than the threshold T_v; if V(x, y) < T_v, the value M_o(x, y) of the optical flow value mask image at coordinate (x, y) is set to 0; otherwise, M_o(x, y) is set to 255.
Preferably, in step S22, the specific process of removing the optical flow scatter points in the mask image of the moving target based on the optical flow direction histogram includes:
S221, presetting an optical flow direction histogram threshold T;
S222, judging, from the optical flow direction histogram, whether the number of optical flow points in each optical flow direction of the optical flow value mask image is smaller than the threshold T; if the number of optical flow points in a certain optical flow direction is less than T, removing the optical flow point information in that direction.
Preferably, in step S3, the specific process of updating the tracking point information of the tracked targets in the motion area mask image according to the optical flow information in the optical flow image is as follows:
S31, calculating a target motion direction histogram and a target optical flow direction histogram based on all tracking point information of the target and the optical flow information of the current frame image;
S32, obtaining the optical flow main direction from the optical flow directions of the tracking points counted in the target optical flow direction histogram, and obtaining the target main motion direction from the overall motion directions of the tracking points counted in the target motion direction histogram;
S33, setting tracking points whose optical flow direction is inconsistent with the optical flow main direction, or whose motion direction is inconsistent with the target main motion direction, as low-confidence tracking points; if a tracking point is a low-confidence tracking point for N consecutive frames, the tracking point is removed.
Preferably, in step S4, the specific process of calculating a new connected domain of the tracked target according to the updated tracking point information of the tracked target to obtain the new target mask image area includes:
S41, generating a target tracking area mask image M_t based on the effective tracking point information of the target after the low-confidence tracking points are removed;
S42, based on the optical flow direction, calculating the area mask image M_a whose optical flow direction is consistent with the target optical flow main direction O_f, and intersecting M_a with the motion area mask image M_o1 to obtain the target possible area mask image M_p:
M_a(x, y) = 255 if |F(x, y) - O_f| ≤ t, and M_a(x, y) = 0 otherwise;
M_p = M_a ∩ M_o1,
where t represents a direction threshold preset by the user, and F(x, y) represents the direction of the optical flow at coordinate (x, y);
S43, taking a point of the target tracking area mask image M_t as a seed point, and finding the connected domain in the target possible area mask image M_p to obtain the new target mask image area M_n.
Preferably, in step S5, the specific process of adding and deleting tracking point information in the new target mask image area is as follows:
S51, traversing all tracking points of the target; for a tracking point with coordinates (x_t, y_t), if M_n(x_t, y_t) is 0, marking the point as a hidden tracking point; if the tracking point is a hidden tracking point for N consecutive frames, deleting it;
S52, inverting the target tracking area mask image M_t and taking the AND with the new target mask image area M_n to obtain the target untracked area mask image M_nt;
S53, uniformly taking tracking points within the connected domain of the target untracked area mask image M_nt and adding them to the target tracking point set.
Preferably, in step S7, the specific process of adding new targets to be tracked in the untracked area mask image and tracking them is as follows:
S71, performing morphological processing on the untracked area mask image M_new, computing the connected domains, and removing targets whose connected domain area is smaller than the preset area threshold to obtain a new untracked area mask image M_new1;
S72, computing the connected domains of the new untracked area mask image M_new1 and assigning a new target ID to each connected domain;
S73, traversing each new target, uniformly taking tracking points on its mask image, and tracking them.
Preferably, the specific process of outputting the tracked target information and the new target information is as follows:
All tracking targets are traversed, target information is counted based on the tracking information, and target coordinates and ID information are output, giving the moving target detection result of the frame image.
In view of the above objects, the present invention further provides an intelligent pipe gallery moving object detection device based on an optical flow algorithm, which includes a memory and a processor, wherein the processor is configured to execute the above intelligent pipe gallery moving object detection method based on an optical flow algorithm according to instructions stored in the memory.
Compared with the prior art, the intelligent pipe gallery moving target detection method based on the optical flow algorithm effectively removes optical flow scatter points by using the direction and magnitude information of the optical flow, deletes connected domains with an area smaller than the preset area threshold, and tracks targets by using the optical flow; by combining target tracking information, different targets can be effectively segmented, the target detection rate is effectively improved, and the recognition rate of adhered targets is improved.
The invention can improve the robustness of the optical flow algorithm to illumination and shadow by setting a reasonable binarization threshold and removing optical flow scatter points; by removing optical flow scatter points and using target tracking information, the adhesion rate between targets can be reduced and the detection rate of moving targets improved. The intelligent pipe gallery moving object detection method based on the optical flow algorithm can be used in moving object detection tasks for various intelligent pipe gallery scenes.
Detailed Description
For the purpose of promoting a clear understanding of the objects, aspects and advantages of the embodiments of the present application, reference will now be made to the accompanying drawings and detailed description, wherein like reference numerals refer to like elements throughout.
The illustrative embodiments and descriptions of the present application are provided to explain the present application and not to limit the present application. Additionally, the same or similar numbered elements/components used in the drawings and the embodiments are used to represent the same or similar parts.
As used herein, "first," "second," …, etc., are not specifically intended to mean in a sequential or chronological order, nor are they intended to limit the application, but merely to distinguish between elements or operations described in the same technical language.
With respect to directional terminology used herein, for example: up, down, left, right, front or rear, etc., are simply directions with reference to the drawings. Accordingly, the directional terminology used is intended to be illustrative and is not intended to be limiting of the present teachings.
As used herein, the terms "comprising," "including," "having," "containing," and the like are open-ended terms that mean including, but not limited to.
As used herein, "and/or" includes any and all combinations of the described items.
References to "plurality" herein include "two" and "more than two"; reference to "multiple sets" herein includes "two sets" and "more than two sets".
As used herein, the terms "substantially", "about" and the like are used to modify any slight variation in quantity or error that does not alter the nature of the variation. In general, the range of slight variations or errors that such terms modify may be 20% in some embodiments, 10% in some embodiments, 5% in some embodiments, or other values. It should be understood by those skilled in the art that the aforementioned values can be adjusted according to actual needs, and are not limited thereto.
Certain words used to describe the present application are discussed below or elsewhere in this specification to provide additional guidance to those skilled in the art in describing the present application.
Fig. 1 is a flowchart of a method for detecting a moving object of an intelligent pipe gallery based on an optical flow algorithm in embodiment 1 of the present invention, where the method includes the following steps:
S1, acquiring the K-th frame image and the (K-i)-th frame image, and calculating the optical flow image of the K-th frame image from the acquired images.
Here the value of i ranges from 1 to 4. The optical flow image includes the optical flow magnitude and the optical flow direction.
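Where helpful, the steps below are accompanied by short code sketches. As a first illustration, a minimal sketch of S1 follows, assuming grayscale input frames and OpenCV; the embodiment below uses the pyramid LK optical flow algorithm, whereas this sketch substitutes the dense Farneback method so that every pixel carries an optical flow value and direction. All parameter values are illustrative.

```python
# Minimal sketch of S1 (assumptions: grayscale uint8 frames, OpenCV).
# The embodiment cites pyramid LK; Farneback dense flow is used here so
# that every pixel has a flow value V(x, y) and a direction F(x, y).
import cv2

def optical_flow_image(frame_k_minus_i, frame_k):
    flow = cv2.calcOpticalFlowFarneback(frame_k_minus_i, frame_k, None,
                                        pyr_scale=0.5, levels=3,
                                        winsize=15, iterations=3,
                                        poly_n=5, poly_sigma=1.2, flags=0)
    # V(x, y): flow magnitude; F(x, y): flow direction in degrees
    mag, ang = cv2.cartToPolar(flow[..., 0], flow[..., 1],
                               angleInDegrees=True)
    return mag, ang
```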
S2, obtaining an optical flow value mask image from the optical flow image, and removing optical flow scatter points and connected domains with an area smaller than a preset area threshold from the optical flow value mask image to obtain the motion area mask image M_o1. The specific process comprises the following steps:
S21, binarization processing is performed on the optical flow image to obtain the optical flow value mask image, i.e., the mask image M_o of the moving target.
Specifically, the optical flow image of the K-th frame image is subjected to binarization processing to obtain an optical flow value mask image.
The specific process of carrying out binarization processing on the optical flow image comprises the following steps:
S211, presetting an optical flow value threshold T_v;
S212, judging whether the optical flow value V(x, y) of the optical flow image at coordinate (x, y) is less than the threshold T_v; if V(x, y) < T_v, the value M_o(x, y) of the optical flow value mask image at coordinate (x, y) is set to 0; otherwise, M_o(x, y) is set to 255. That is, the value of the optical flow value mask image at coordinate (x, y) is:
M_o(x, y) = 0 if V(x, y) < T_v, and M_o(x, y) = 255 otherwise.
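A minimal sketch of this thresholding, assuming the flow magnitude array produced in S1; the threshold value is illustrative.

```python
# Sketch of S211-S212: threshold the flow magnitude into the optical
# flow value mask image M_o; the value of T_v is an assumption.
import numpy as np

def binarize_flow(mag, t_v=1.0):
    return np.where(mag < t_v, 0, 255).astype(np.uint8)  # M_o(x, y)
```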
S22, removing the optical flow scatter points in the mask image M_o of the moving target based on the optical flow direction histogram, i.e., setting the mask image to 0 where the optical flow scatter points are located. The specific process is as follows:
S221, presetting an optical flow direction histogram threshold T;
S222, judging, from the optical flow direction histogram, whether the number of optical flow points in each optical flow direction of the optical flow value mask image is smaller than the threshold T; if the number of optical flow points in a certain optical flow direction is less than T, removing the optical flow point information in that direction, thereby effectively removing the optical flow scatter points and effectively separating adhered targets.
Specifically, the optical flow scatter points are processed using the following optical flow scatter removal formula:
M(x, y) = 0 if H(F(x, y)) < T, and M(x, y) = M_o(x, y) otherwise.
In the formula, M(x, y) represents the value of the optical flow mask image at coordinate (x, y) after the optical flow scatter points are removed; I = F(x, y) represents the direction of the optical flow at coordinate (x, y); H(I) represents the value of the optical flow direction histogram at direction I.
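A sketch of this histogram test follows, assuming the direction image in degrees from S1; the bin count and the threshold T are illustrative assumptions.

```python
# Sketch of S22: zero out foreground pixels whose direction bin is
# under-populated in the optical flow direction histogram H(I).
import numpy as np

def remove_flow_scatter(m_o, ang_deg, t=200, bins=36):
    idx = (ang_deg // (360.0 / bins)).astype(np.int32) % bins  # bin of F(x, y)
    fg = m_o == 255
    hist = np.bincount(idx[fg], minlength=bins)                # H(I)
    cleaned = m_o.copy()
    cleaned[fg & (hist[idx] < t)] = 0                          # H(F(x, y)) < T
    return cleaned
```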
S23, computing the connected domains of pixels with value 255, counting the area of each connected domain, and deleting the connected domains with an area smaller than the preset area threshold to obtain the mask image of the effective targets, i.e., the motion area mask image M_o1.
That is, the connected domains whose area is smaller than the preset area threshold are deleted from the optical flow mask image after scatter removal, giving the mask image of the effective targets.
The mask image of the effective targets contains a plurality of targets; each connected domain is a candidate target.
It should be noted that deleting a connected domain with an area smaller than the preset area threshold is implemented by setting all pixels of that connected domain to 0.
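A sketch of this area filtering, with an assumed area threshold:

```python
# Sketch of S23: set every pixel of an undersized connected domain to 0.
# The area threshold is an assumption.
import cv2

def remove_small_regions(mask, min_area=50):
    n, labels, stats, _ = cv2.connectedComponentsWithStats(mask)
    out = mask.copy()
    for j in range(1, n):                     # label 0 is the background
        if stats[j, cv2.CC_STAT_AREA] < min_area:
            out[labels == j] = 0              # delete the small domain
    return out
```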
S3, updating the tracking point information of the tracked targets in the motion area mask image according to the optical flow information in the optical flow image obtained in step S1. The specific process is as follows:
S31, calculating the target motion direction histogram and the target optical flow direction histogram based on all tracking point information of the target and the optical flow information of the current frame image.
S32, obtaining the optical flow main direction O_f from the optical flow directions of the tracking points counted in the target optical flow direction histogram, and obtaining the target main motion direction O_m from the overall motion directions of the tracking points counted in the target motion direction histogram.
S33, setting tracking points whose optical flow direction is inconsistent with the optical flow main direction O_f, or whose motion direction is inconsistent with the target main motion direction O_m, as low-confidence tracking points; if a tracking point is a low-confidence tracking point for N consecutive frames, the tracking point is removed.
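A sketch of this pruning, under an assumed per-point record {"xy", "dir", "low"} and an assumed angular tolerance for "inconsistent":

```python
# Sketch of S33: a point disagreeing with O_f or O_m grows its
# low-confidence count and is dropped once the count reaches N frames.
def prune_low_confidence(points, ang_deg, o_f, o_m, tol=30.0, n=3):
    kept = []
    for p in points:
        x, y = p["xy"]
        # signed angular differences folded into [-180, 180)
        d_flow = abs((ang_deg[y, x] - o_f + 180.0) % 360.0 - 180.0)
        d_move = abs((p["dir"] - o_m + 180.0) % 360.0 - 180.0)
        p["low"] = p["low"] + 1 if (d_flow > tol or d_move > tol) else 0
        if p["low"] < n:
            kept.append(p)
    return kept
```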
S4, calculating a new connected domain of the tracked target according to the updated tracking point information of the tracked target to obtain the new target mask image area. The specific process is as follows:
S41, generating the target tracking area mask image M_t based on the effective tracking point information of the target after the low-confidence tracking points are removed.
S42, based on the optical flow direction, calculating the area mask image M_a whose optical flow direction is consistent with the target optical flow main direction O_f, and intersecting M_a with the motion area mask image M_o1 to obtain the target possible area mask image M_p:
M_a(x, y) = 255 if |F(x, y) - O_f| ≤ t, and M_a(x, y) = 0 otherwise;
M_p = M_a ∩ M_o1,
where t represents a direction threshold preset by the user.
S43, taking a point of the target tracking area mask image M_t as a seed point, and finding the connected domain in the target possible area mask image M_p to obtain the new target mask image area M_n.
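A sketch of S42-S43, assuming the direction image from S1 and an illustrative direction threshold t:

```python
# Build the direction-consistent mask M_a, intersect with M_o1 to get
# M_p, then take the connected domain of M_p containing a seed point
# drawn from M_t as the new target region M_n.
import cv2
import numpy as np

def new_target_region(ang_deg, o_f, m_o1, m_t, t=30.0):
    diff = np.abs((ang_deg - o_f + 180.0) % 360.0 - 180.0)
    m_a = np.where(diff <= t, 255, 0).astype(np.uint8)
    m_p = cv2.bitwise_and(m_a, m_o1)            # M_p = M_a ∩ M_o1
    ys, xs = np.nonzero(m_t)
    if len(xs) == 0:
        return np.zeros_like(m_p)
    _, labels = cv2.connectedComponents(m_p)
    seed_label = labels[ys[0], xs[0]]           # seed point from M_t
    if seed_label == 0:                         # seed falls on background
        return np.zeros_like(m_p)
    return np.where(labels == seed_label, 255, 0).astype(np.uint8)
```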
S5, adding and deleting tracking point information in the new target mask image area M_n. The specific process is as follows:
S51, traversing all tracking points of the target; for a tracking point with coordinates (x_t, y_t), if M_n(x_t, y_t) is 0, marking the point as a hidden tracking point; if the tracking point is a hidden tracking point for N consecutive frames, deleting it.
S52, inverting the target tracking area mask image M_t and taking the AND with the new target mask image area M_n to obtain the target untracked area mask image M_nt.
S53, uniformly taking tracking points within the connected domain of the target untracked area mask image M_nt and adding them to the target tracking point set.
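A sketch of S51-S53 under the same assumed point record; the grid step for uniform sampling is an assumption.

```python
# Points outside M_n are marked hidden and dropped after N hidden
# frames; new points are sampled on a uniform grid inside M_nt.
import cv2
import numpy as np

def refresh_points(points, m_n, m_t, step=8, n=3):
    kept = []
    for p in points:                                   # S51
        x, y = p["xy"]
        p["hidden"] = p["hidden"] + 1 if m_n[y, x] == 0 else 0
        if p["hidden"] < n:
            kept.append(p)
    m_nt = cv2.bitwise_and(cv2.bitwise_not(m_t), m_n)  # S52: (NOT M_t) ∩ M_n
    ys, xs = np.nonzero(m_nt[::step, ::step])          # S53: uniform sampling
    for y, x in zip(ys * step, xs * step):
        # motion-direction field of a fresh point is filled on next update
        kept.append({"xy": (int(x), int(y)), "hidden": 0, "low": 0})
    return kept
```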
S6, repeating steps S3 to S5 to complete the updating of the tracking information of all tracked targets; the tracked area mask image M_at is obtained by taking the union of the mask images of all targets, and M_at is inverted and intersected with the moving target area mask image M_o1 to obtain the untracked area mask image M_new.
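A sketch of this mask combination:

```python
# Sketch of S6: union all per-target masks into M_at, then
# M_new = (NOT M_at) ∩ M_o1 keeps the motion pixels no target covers.
import cv2
import numpy as np

def untracked_mask(target_masks, m_o1):
    m_at = np.zeros_like(m_o1)
    for m in target_masks:                  # union over all tracked targets
        m_at = cv2.bitwise_or(m_at, m)
    return cv2.bitwise_and(cv2.bitwise_not(m_at), m_o1)   # M_new
```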
S7, adding new targets to be tracked in the untracked area mask image and tracking them. The specific process is as follows:
S71, performing morphological processing on the untracked area mask image M_new, computing the connected domains, and removing targets with a small connected domain area by the method of step S23 to obtain the new untracked area mask image M_new1.
S72, computing the connected domains of the new untracked area mask image M_new1 and assigning a new target ID to each connected domain.
S73, traversing each new target, uniformly taking tracking points on its mask image, and tracking them.
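A sketch of S71-S73; the kernel, area threshold, and grid step are assumptions.

```python
# Open M_new morphologically, keep sufficiently large connected
# domains, and give each one a fresh target ID with a grid of points.
import cv2
import numpy as np

def spawn_targets(m_new, next_id, min_area=50, step=8):
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
    m = cv2.morphologyEx(m_new, cv2.MORPH_OPEN, kernel)        # S71
    n, labels, stats, _ = cv2.connectedComponentsWithStats(m)
    targets = []
    for j in range(1, n):                                      # S72
        if stats[j, cv2.CC_STAT_AREA] < min_area:
            continue
        mask = np.where(labels == j, 255, 0).astype(np.uint8)
        ys, xs = np.nonzero(mask[::step, ::step])              # S73
        pts = [{"xy": (int(x * step), int(y * step)), "hidden": 0, "low": 0}
               for y, x in zip(ys, xs)]
        targets.append({"id": next_id, "mask": mask, "points": pts})
        next_id += 1
    return targets, next_id
```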
S8, outputting the tracked target information and the new target information.
All tracking targets are traversed, target information is counted based on the tracking information, and target coordinates and ID information are output, giving the moving target detection result of the frame image.
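A sketch of this output step under the assumed target record:

```python
# Sketch of S8: report each target's ID and the bounding box of its mask.
import numpy as np

def output_targets(targets):
    results = []
    for t in targets:
        ys, xs = np.nonzero(t["mask"])
        if len(xs) == 0:
            continue
        results.append({"id": t["id"],
                        "box": (int(xs.min()), int(ys.min()),
                                int(xs.max()), int(ys.max()))})
    return results
```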
The following describes the process of the method for detecting a moving object of an intelligent pipe gallery based on an optical flow algorithm according to embodiment 2 of the present invention with reference to the flowchart of fig. 2 and the images of figs. 3 to 13.
An image is acquired and optical flow information is calculated, where the optical flow information comprises the optical flow value and the optical flow direction.
Optical flow scatter points are removed according to the optical flow direction.
Targets whose connected domain area is smaller than the preset area threshold are removed.
Whether the current frame image is the first frame image is judged; if so, each connected region is acquired, marked as a new target, tracked, and the tracking targets are output; otherwise, the tracking point information of the tracked targets is updated according to the optical flow information.
Consistency verification is carried out on the tracking points.
Inconsistent tracking points are eliminated to generate an effective tracking mask image.
The target connected domain is calculated according to the optical flow direction, and tracking points are added in the untracked area.
The target information of each tracked target is calculated from its tracking information.
All effective tracking mask images are differenced with the optical flow value mask image to generate the untracked area.
Finally, all tracking targets are output.
The (K-1)-th frame image shown in fig. 3(a) and the K-th frame image shown in fig. 3(b) are acquired, and the optical flow image of the K-th frame image, shown in fig. 4(a), is calculated using the pyramid LK optical flow algorithm.
The optical flow image of the K-th frame image shown in fig. 4(a) is subjected to binarization processing to obtain the optical flow value mask image shown in fig. 4(b).
The optical flow scatter points in the optical flow value mask image shown in fig. 4(b) are removed according to the optical flow direction histogram shown in fig. 5(a), giving the scatter-removed optical flow mask image shown in fig. 5(b).
The connected domains with an area smaller than the preset area threshold are deleted from the scatter-removed optical flow mask image shown in fig. 5(b), giving the effective target mask image shown in fig. 6(a).
The target motion direction histogram and the target optical flow direction histogram shown in fig. 6(b) are obtained from all the tracking point information of the target and the optical flow information of the current frame image.
Drawing the unhidden tracking point coordinates from the target motion direction histogram and the target optical flow direction histogram shown in fig. 6(b) gives the target tracking point image shown in fig. 7(a).
The target tracking point mask image shown in fig. 7(b) is obtained by performing dilation processing on the target tracking point image shown in fig. 7(a).
Optical flow points whose direction is inconsistent with the target optical flow direction are removed from the effective target mask image shown in fig. 6(a), resulting in the target mask image shown in fig. 8(a).
The target tracking point mask image shown in fig. 7(b) and the target mask image shown in fig. 8(a) are ANDed to obtain the effective tracking point mask image shown in fig. 8(b).
The target circumscribed rectangle from the target tracking point mask image shown in fig. 7(b) is overlaid on the target mask image shown in fig. 8(a), where the gray area inside the white area is the central area of the target circumscribed rectangle, resulting in the target tracking information display diagram shown in fig. 9(a).
Taking the gray area inside the white area of the target tracking information display diagram in fig. 9(a) as the seed area, a seed filling method is used to obtain the accurate target mask image shown in fig. 9(b).
The accurate target mask image shown in fig. 9(b) and the target tracking point mask image shown in fig. 7(b) are ANDed to obtain the image of the target's valid, low-confidence, and invalid tracking points shown in fig. 10(a), where the left side of the white area (i.e., the area present in fig. 7(b) but not in fig. 8(a)) is where the low-confidence tracking points are located, and the peripheral area of the white area (i.e., the area absent for N consecutive frames) is where the invalid tracking points are located.
The accurate target mask image shown in fig. 9(b) is XORed with the effective tracking point mask image shown in fig. 8(b), resulting in the target untracked area image shown in fig. 10(b).
The target untracked area image shown in fig. 10(b) is morphologically processed to obtain the image shown in fig. 11(a).
Tracking points are added to the morphologically processed target untracked area image shown in fig. 11(a) to track the target, giving the new target tracking point map shown in fig. 11(b).
The accurate target mask image shown in fig. 9(b) is XORed with the effective target mask image shown in fig. 6(a), resulting in the target mask image with the tracked targets removed shown in fig. 12(a).
FIG. 12(b) shows the connected domain of the new target.
As shown in fig. 13, each box in the figure represents a moving object, and the number at the upper left corner of the object box is the object ID.
According to the intelligent pipe gallery moving target detection method based on the optical flow algorithm, the robustness of the optical flow algorithm to illumination and shadow can be improved by setting a reasonable binarization threshold and removing optical flow scatter points; by removing optical flow scatter points and using target tracking information, the adhesion rate between targets can be reduced and the detection rate of moving targets improved. The intelligent pipe gallery moving object detection method based on the optical flow algorithm can be used in moving object detection tasks for various intelligent pipe gallery scenes.
In an exemplary embodiment, the present application further provides an intelligent pipe gallery moving object detection device based on an optical flow algorithm, which includes a memory and a processor coupled to the memory, wherein the processor is configured to execute the intelligent pipe gallery moving object detection method based on an optical flow algorithm of any embodiment of the present application according to instructions stored in the memory.
The memory may be a system memory, a fixed nonvolatile storage medium, or the like, and the system memory may store an operating system, an application program, a boot loader, a database, other programs, and the like.
In an exemplary embodiment, the present application further provides a computer storage medium, which is a computer readable storage medium, for example, a memory including a computer program, which is executable by a processor to perform the method for detecting a moving object of a smart pipe gallery based on an optical flow algorithm in any of the embodiments of the present application.
The foregoing is merely an illustrative embodiment of the present application, and any equivalent changes and modifications made by those skilled in the art without departing from the spirit and principles of the present application shall fall within the protection scope of the present application.