Disclosure of Invention
The embodiments of the invention provide a moving target detection method with a simple algorithm and a small amount of data processing, and a device adopting the moving target detection method that has low cost and a high running speed.
The moving target detection method comprises the following steps: acquiring a key monitoring area selected by a user from a monitoring scene; establishing a background model for the key monitoring area; and extracting a foreground image of the moving target from image frames of the monitoring scene with reference to the background model.
Wherein the step of establishing a background model for the key monitoring area comprises: establishing the background model for the key monitoring area by adopting a single Gaussian background modeling method, wherein the single Gaussian background modeling is divided into two steps:
step A, acquiring a section of fixed-background video of the key monitoring area and making an estimate for the video sequence B0, i.e. obtaining the mean luminance μ0(x, y) and the variance σ0²(x, y) of each pixel:
μ0(x, y) = (1/T) · Σ f_t(x, y),  t = 1, …, T
σ0²(x, y) = (1/T) · Σ [f_t(x, y) − μ0(x, y)]²,  t = 1, …, T
where T is the time length (number of frames) of the video sequence and f_t(x, y) is the video pixel value at (x, y) in frame t, thereby establishing an initial background model;
step B, updating the model according to the video frame input each time:
μ_t(x, y) = (1 − α) · μ_{t-1}(x, y) + α · f_t(x, y)
σ_t²(x, y) = (1 − α) · σ_{t-1}²(x, y) + α · [f_t(x, y) − μ_t(x, y)]²
where α is the update rate, a number K0 between 0 and 1.
The step of extracting a foreground image of the moving target from the image frames of the monitored scene with reference to the background model comprises: acquiring the image frame of the key monitoring area from the image frames of the monitoring scene; finding, in the image frame of the key monitoring area, an area image whose pixels differ from the background model; and determining the area image with differing pixels as the foreground image of the moving target.
Wherein the step of extracting a foreground image of the moving target from the image frames of the monitored scene with reference to the background model further comprises: performing morphological filtering on the foreground image of the moving target.
The step of extracting a foreground image of the moving target from the image frame of the monitored scene with reference to the background model further includes: when a shadow is detected in the foreground image, eliminating the shadow by adopting a shadow elimination algorithm based on the HSV (Hue, Saturation, Value) color model; the HSV color model shadow elimination algorithm is divided into the following steps:
step C, setting f(x, y) as the value of the current motion area pixel and g(x, y) as the value of the background model pixel;
step D, converting the RGB (red, green, blue) color model into the HSV color model:
max = max(R, G, B)
min = min(R, G, B)
if R = max, H = (G - B)/(max - min)
if G = max, H = 2 + (B - R)/(max - min)
if B = max, H = 4 + (R - G)/(max - min)
H = H * 60
if H < 0, H = H + 360
V = max(R, G, B)
S = (max - min)/max
and acquiring the brightness value V(f(x, y)), hue value H(f(x, y)) and saturation value S(f(x, y)) of the current motion area, and the corresponding values V(g(x, y)), H(g(x, y)) and S(g(x, y)) of the background model;
step E, setting a threshold U; if |V(f(x, y)) - V(g(x, y))| < U, the point f(x, y) is defined as a shadow point, and the value of the shadow point pixel is removed from the pixels of the current motion area, thereby removing the shadow point;
step F, changing the pixels of the shadow that falls on the moving target itself back to white, so as to restore the moving target.
Correspondingly, the invention also provides a device for implementing the moving target detection method, which comprises: an acquisition module, configured to acquire a key monitoring area selected by a user from a monitoring scene; a model building module, configured to establish a background model for the key monitoring area; and an extraction module, configured to extract a foreground image of the moving target from image frames of the monitoring scene with reference to the background model.
The model building module establishes the background model for the key monitoring area by adopting a single Gaussian background modeling method. The extraction module comprises: a monitoring area image obtaining unit, configured to obtain the image frame of the key monitoring area from the image frames of the monitored scene; a searching unit, configured to find, in the image frame of the key monitoring area, an area image whose pixels differ from the background model; and a determining unit, configured to determine the area image with differing pixels as the foreground image of the moving target.
The moving object detecting device further includes a filtering module, configured to perform morphological filtering on the foreground image of the moving target.
Wherein the moving object detecting device further includes a shadow removing module, configured to remove the shadow in the foreground image by adopting a shadow elimination algorithm based on the HSV color model when a shadow is detected in the foreground image.
The invention has the following beneficial effects: the moving target detection method acquires a key monitoring area selected by a user from a monitoring scene, establishes a background model for the key monitoring area, and extracts a foreground image of the moving target from image frames of the monitoring scene with reference to the background model, thereby achieving the monitoring effect. Because interference factors outside the key monitoring area need not be considered, the algorithm is simple and the amount of data processing is small. The device implementing the method therefore has low cost and a high running speed.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Fig. 1 is a flowchart illustrating a moving object detection method according to a first embodiment of the present invention. The method comprises the following steps:
Step S11, acquiring the key monitoring area selected by the user from the monitoring scene.
In general, a moving target always appears in a certain partial area of the monitored scene rather than in the entire monitored area; therefore, that partial area can be designated as a key monitoring area. In step S11, the key monitoring area selected by the user is obtained by prompting the user to mark the key monitoring area on a monitoring image of the monitoring scene with a dashed box.
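By way of illustration only, the selection of the key monitoring area could be implemented along the following lines; the function name select_key_region and the use of OpenCV's interactive cv2.selectROI helper are assumptions of this sketch and not part of the claimed method:

import cv2

def select_key_region(frame):
    """Let the user drag a box over one monitoring frame and return it as
    (x, y, w, h).  cv2.selectROI shows the frame and lets the user draw the
    selection rectangle; pressing ENTER or SPACE confirms it."""
    x, y, w, h = cv2.selectROI("Select key monitoring area", frame,
                               showCrosshair=False, fromCenter=False)
    cv2.destroyWindow("Select key monitoring area")
    return x, y, w, h

The box is returned in pixel coordinates so that later steps can crop every incoming frame to the same key monitoring area.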
Step S12, establishing a background model for the key monitoring area.
Background model building methods fall into two broad categories. In the first, a monitoring image containing no moving target (i.e., an image without foreground) is captured and used directly as the background model; this approach is only suitable for indoor monitoring, because in a real environment the background of a monitored scene changes constantly (illumination, leaves blown by the wind, and so on), which introduces serious interference. The second category is Gaussian background modeling, which uses a continuously refreshed background model to adapt to environmental changes of the monitored scene. Gaussian background modeling is further divided into single Gaussian background modeling and mixed Gaussian background modeling. The single Gaussian method is suitable when the background pixels change little, i.e., there is little noise and the colors are concentrated; the mixed Gaussian method is robust to dynamic changes of the monitored scene and is suitable for target detection in complex scenes.
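For comparison only, the mixed Gaussian alternative mentioned above is available off the shelf; the sketch below assumes OpenCV's MOG2 background subtractor with its default shadow handling and is not the modeling method preferred by the embodiments:

import cv2

# Mixed (adaptive) Gaussian background subtraction, shown for comparison only.
subtractor = cv2.createBackgroundSubtractorMOG2(history=500,
                                                varThreshold=16,
                                                detectShadows=True)

def mog2_foreground_mask(frame):
    # Returns a mask in which 255 marks foreground, 127 marks detected shadow
    # and 0 marks background.
    return subtractor.apply(frame)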
In step S13, a foreground image of the moving object is extracted from the image frame of the monitored scene with reference to the background model.
According to the embodiment of the invention, the key monitoring area selected by the user from the monitoring scene is obtained, so that the monitoring range is greatly reduced; a background model is established only for the key monitoring area, rather than for the entire monitored area, so that interference factors such as wind, rain, snow and shaking leaves are relatively limited. The embodiment of the invention therefore has a simple algorithm and a small amount of data processing.
Fig. 2 is a flowchart illustrating a moving object detecting method according to a second embodiment of the present invention. The method comprises the following steps:
Step S21, acquiring the key monitoring area selected by the user from the monitoring scene.
In general, a moving target always appears in a certain partial area of the monitored scene rather than in the entire monitored area; therefore, that partial area can be designated as a key monitoring area. In step S21, the key monitoring area selected by the user is obtained by prompting the user to mark the key monitoring area on a monitoring image of the monitoring scene with a dashed box.
Step S22, establishing a background model for the key monitoring area by adopting a single Gaussian background modeling method.
Background model building methods fall into two broad categories. In the first, a monitoring image containing no moving target (i.e., an image without foreground) is captured and used directly as the background model; this approach is only suitable for indoor monitoring, because in a real environment the background of a monitored scene changes constantly (illumination, leaves blown by the wind, and so on), which introduces serious interference. The second category is Gaussian background modeling, which uses a continuously refreshed background model to adapt to environmental changes of the monitored scene. Gaussian background modeling is further divided into single Gaussian background modeling and mixed Gaussian background modeling. The single Gaussian method is suitable when the background pixels change little, i.e., there is little noise and the colors are concentrated; the mixed Gaussian method is robust to dynamic changes of the monitored scene and is suitable for target detection in complex scenes. Since step S21 has already excluded a significant portion of the noise through the user's selection of the key monitoring area, step S22 preferably employs the single Gaussian background modeling method.
The single Gaussian background modeling is divided into two steps:
Step A, estimating the background image. A section of the acquired fixed-background video of the key monitoring area is taken and an estimate is made for the video sequence B0, i.e. the mean luminance μ0(x, y) and the variance σ0²(x, y) of each pixel are obtained. Expressed mathematically:
μ0(x, y) = (1/T) · Σ f_t(x, y),  t = 1, …, T
σ0²(x, y) = (1/T) · Σ [f_t(x, y) − μ0(x, y)]²,  t = 1, …, T
where T is the time length (number of frames) of the video sequence and f_t(x, y) is the video pixel value at (x, y) in frame t. This establishes the initial background model.
Step B, updating the model according to the video frame input each time. Starting from the initial background model obtained in step A, the model is updated with each newly input video frame so as to adapt to changes of the environment. Expressed mathematically:
μ_t(x, y) = (1 − α) · μ_{t-1}(x, y) + α · f_t(x, y)
σ_t²(x, y) = (1 − α) · σ_{t-1}²(x, y) + α · [f_t(x, y) − μ_t(x, y)]²
where α is the update rate, a number K0 between 0 and 1.
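A minimal NumPy sketch of steps A and B follows, assuming the key monitoring area has already been cropped out and converted to grayscale; the class name SingleGaussianBackground and the default update rate of 0.05 are illustrative assumptions:

import numpy as np

class SingleGaussianBackground:
    """Per-pixel single Gaussian background model for the key monitoring area."""

    def __init__(self, init_frames, alpha=0.05):
        # Step A: estimate the initial mean and variance over T frames of
        # fixed-background video of the key monitoring area.
        stack = np.stack([f.astype(np.float32) for f in init_frames])  # (T, H, W)
        self.mu = stack.mean(axis=0)
        self.var = stack.var(axis=0) + 1e-6   # avoid a zero variance
        self.alpha = alpha                    # update rate, between 0 and 1

    def update(self, frame):
        # Step B: blend the newly input frame into the model so that it
        # follows slow changes of the environment.
        f = frame.astype(np.float32)
        self.mu = (1.0 - self.alpha) * self.mu + self.alpha * f
        self.var = (1.0 - self.alpha) * self.var + self.alpha * (f - self.mu) ** 2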
In step S23, the image frame of the key monitoring area is obtained from the image frames of the monitored scene.
Since the background model is established only for the key monitoring area in step S22, the image of the key monitoring area must be obtained from the image frame of the monitored scene so that the two can be compared.
In step S24, an area image whose pixels differ from the background model is found in the image frame of the key monitoring area.
By comparing the image frame of the key monitoring area with the background model pixel by pixel, the area images whose pixels differ from the background model can be found.
In step S25, the area image with different pixels is determined as the foreground image of the moving object.
After step S25, the foreground image of the moving target may be filled with white, and the other areas of the image frame of the key monitoring area whose pixels are unchanged may be filled with black, so that a binarized image is output for the processing of subsequent images.
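A possible implementation of steps S23 to S25 is sketched below; the rule that a pixel "differs" from the background model when it deviates from the model mean by more than k standard deviations, and the value k = 2.5, are assumptions, since the embodiment does not fix a specific comparison rule:

import numpy as np

def extract_foreground(model, roi_frame, k=2.5):
    """Steps S23 to S25: compare the key-area frame with the background model
    and output a binarized image (255 = moving-target foreground, 0 = background)."""
    f = roi_frame.astype(np.float32)
    differs = np.abs(f - model.mu) > k * np.sqrt(model.var)  # pixels that differ
    return np.where(differs, 255, 0).astype(np.uint8)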
In step S26, morphological filtering is performed on the foreground image of the moving target. The edges of the foreground image determined in step S25 are usually not smooth and the image contains holes; performing morphological filtering on the foreground image smooths its edges and eliminates the holes.
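Step S26 could be realized, for example, with a morphological opening followed by a closing; the 5 x 5 elliptical structuring element in this sketch is an assumption:

import cv2

def smooth_foreground(binary_mask, kernel_size=5):
    """Step S26: an opening removes isolated noise pixels, a closing fills
    small holes and smooths the edges of the foreground blobs."""
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE,
                                       (kernel_size, kernel_size))
    opened = cv2.morphologyEx(binary_mask, cv2.MORPH_OPEN, kernel)
    return cv2.morphologyEx(opened, cv2.MORPH_CLOSE, kernel)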
Step S27, when a shadow is detected in the foreground image, removing the shadow in the foreground image by adopting a shadow elimination algorithm based on the HSV color model.
In an actual monitoring scene, illumination is not uniform, for reasons such as weather or occlusion between moving targets, so shadows inevitably appear in the foreground image.
At present, there are two general approaches to shadow removal. One is to establish a statistical shadow model from the characteristics of shadows and judge, according to that model, whether each pixel belongs to a shadow area. The other is a shadow elimination algorithm based on the HSV color model, which directly uses image characteristics such as brightness, hue and saturation to judge whether a pixel is shadow. A large number of observation experiments show that when background pixels are covered by shadow, the changes in hue and saturation are very small and the saturation decreases slightly, whereas the influence of a moving target on the background's brightness and chrominance is random and depends on the target's texture and color. Shadow can therefore be distinguished from moving targets by its characteristic brightness and chrominance changes. Because shadows are diverse, a statistical shadow model is difficult to establish perfectly; in practice, engineers often have to establish a specific shadow statistical model for each monitoring scene, which is inconvenient.
The shadow elimination algorithm based on the HSV color model comprises the following steps:
Step one: let f(x, y) be the value of the current motion region pixel and g(x, y) be the value of the background model pixel.
Step two: the RGB color model is converted into HSV. The algorithm for converting the RGB color model into the HSV color model is as follows:
max = max(R, G, B)
min = min(R, G, B)
if R = max, H = (G - B)/(max - min)
if G = max, H = 2 + (B - R)/(max - min)
if B = max, H = 4 + (R - G)/(max - min)
H = H * 60
if H < 0, H = H + 360
V = max(R, G, B)
S = (max - min)/max
The above conversion is used to obtain the brightness value V(f(x, y)), hue value H(f(x, y)) and saturation value S(f(x, y)) of the current motion area, and the corresponding values V(g(x, y)), H(g(x, y)) and S(g(x, y)) of the background model.
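The per-pixel conversion listed above can be written out directly as follows; this sketch assumes R, G and B are numeric values on the same scale and adds a guard for the max = min case, which the listing leaves implicit:

def rgb_to_hsv(r, g, b):
    """Convert one RGB pixel to (H, S, V) following the listing above.
    H is returned in degrees [0, 360); S in [0, 1]; V on the input scale."""
    mx, mn = max(r, g, b), min(r, g, b)
    v = mx
    s = 0.0 if mx == 0 else (mx - mn) / mx
    if mx == mn:
        h = 0.0                      # achromatic pixel; hue left at 0
    elif mx == r:
        h = (g - b) / (mx - mn)
    elif mx == g:
        h = 2 + (b - r) / (mx - mn)
    else:                            # mx == b
        h = 4 + (r - g) / (mx - mn)
    h *= 60
    if h < 0:
        h += 360
    return h, s, v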
Step three: a threshold U is set and the brightness difference |V(f(x, y)) - V(g(x, y))| is calculated. If |V(f(x, y)) - V(g(x, y))| < U, the point f(x, y) is a shadow point, and the value of the shadow point pixel is removed from the pixels of the current motion area, thereby removing the shadow point.
After the shadow points are removed, a number of holes appear on the moving target. This is because shadow also falls on the moving target itself, and those parts of the target are eliminated as shadow. The moving target therefore needs to be restored by undoing the elimination of the shadow that lies on the target itself.
Step four: the holes on the moving target are restored, i.e. the pixels of the shadow on the moving target are changed to white.
By adopting the four steps above, the shadow can be roughly removed and a complete binarized image of the moving target is output.
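The four steps above might be implemented as in the following sketch; the threshold U = 30 and the use of a morphological closing to restore the holes left on the moving target (step four) are assumptions, since the embodiment does not prescribe a particular restoration operation:

import cv2
import numpy as np

def remove_shadow(fore_mask, frame_hsv, background_hsv, U=30):
    """Steps one to four: suppress shadow points in the motion mask, then
    restore the holes that the suppression leaves on the moving target itself."""
    v_f = frame_hsv[..., 2].astype(np.float32)       # V(f(x, y))
    v_g = background_hsv[..., 2].astype(np.float32)  # V(g(x, y))
    shadow = (np.abs(v_f - v_g) < U) & (fore_mask > 0)
    cleaned = fore_mask.copy()
    cleaned[shadow] = 0                              # step three: drop shadow points
    # Step four: pixels punched out of the target by its own shadow are turned
    # back to white by closing the mask.
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (7, 7))
    return cv2.morphologyEx(cleaned, cv2.MORPH_CLOSE, kernel)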
According to the embodiment of the invention, the key monitoring area selected by the user from the monitoring scene is obtained, and the single Gaussian background model is established for the key monitoring area, so that the monitoring range is reduced, the interference of noise such as leaf shaking and the like can be avoided to a great extent, the data processing amount is reduced, and the cost is reduced.
The moving target detection method of the present invention has been illustrated in detail with reference to Fig. 1 and Fig. 2; the apparatus corresponding to the above method is further described below.
Fig. 3 is a schematic structural diagram of a moving object detecting apparatus according to a first embodiment of the present invention. The moving object detecting apparatus 100 includes an obtaining module 110, configured to obtain a key monitoring area selected by a user from a monitoring scene.
In general, a moving target always appears in a certain partial area of the monitored scene rather than in the entire monitored area; therefore, that partial area can be designated as a key monitoring area. The obtaining module 110 may obtain the key monitoring area selected by the user by prompting the user to mark the key monitoring area on a monitoring image of the monitoring scene with a dashed box.
A model building module 120 is configured to establish a background model for the key monitoring area. The methods by which the model building module 120 may build the background model fall into two broad categories. In the first, a monitoring image containing no moving target (i.e., an image without foreground) is captured and used directly as the background model; this approach is only suitable for indoor monitoring, because in a real environment the background of a monitored scene changes constantly (illumination, leaves blown by the wind, and so on), which introduces serious interference. The second category is Gaussian background modeling, which uses a continuously refreshed background model to adapt to environmental changes of the monitored scene. Gaussian background modeling is further divided into single Gaussian background modeling and mixed Gaussian background modeling. The single Gaussian method is suitable when the background pixels change little, i.e., there is little noise and the colors are concentrated; the mixed Gaussian method is robust to dynamic changes of the monitored scene and is suitable for target detection in complex scenes. In this embodiment, the mixed Gaussian background modeling method may be employed to handle target detection in complex scenes.
An extraction module 130 is configured to extract a foreground image of the moving target from the image frames of the monitored scene with reference to the background model.
According to the embodiment of the invention, the key monitoring area selected by the user from the monitoring scene is obtained, and the background model is established for the key monitoring area, so that the monitoring range is reduced, the interference of noise such as leaf shaking and the like can be avoided to a great extent, the data processing capacity is reduced, and the cost is reduced.
Fig. 4 is a schematic structural diagram of an embodiment of an extraction module according to the present invention. The extraction module 130 includes:
a monitored area image obtaining unit 131, configured to obtain the image frame of the key monitoring area from the image frames of the monitored scene. Since the model building module 120 builds the background model only for the key monitoring area, the image of the key monitoring area must be obtained from the image frame of the monitored scene so that the two can be compared.
a searching unit 132, configured to find, in the image frame of the key monitoring area, an area image whose pixels differ from the background model. By comparing the image frame of the key monitoring area with the background model pixel by pixel, the area images whose pixels differ from the background model can be found.
a determining unit 133, configured to determine the area image with differing pixels as the foreground image of the moving target. After determining the foreground image of the moving target, the determining unit 133 may fill the foreground image with white and fill the other areas of the image frame of the key monitoring area whose pixels are unchanged with black, thereby outputting a binarized image for the processing of subsequent images.
Fig. 5 is a schematic structural diagram of a moving object detecting apparatus according to a second embodiment of the present invention. The moving object detecting apparatus 100 includes:
an obtaining module 110, configured to obtain a key monitoring area selected by a user from a monitoring scene. In general, a moving target always appears in a certain partial area of the monitored scene rather than in the entire monitored area; therefore, that partial area can be designated as a key monitoring area. The obtaining module 110 may obtain the key monitoring area selected by the user by prompting the user to mark the key monitoring area on a monitoring image of the monitoring scene with a dashed box.
A model building module 120 is configured to establish a background model for the key monitoring area by adopting a single Gaussian background modeling method. Background model building methods fall into two broad categories. In the first, a monitoring image containing no moving target (i.e., an image without foreground) is captured and used directly as the background model; this approach is only suitable for indoor monitoring, because in a real environment the background of a monitored scene changes constantly (illumination, leaves blown by the wind, and so on), which introduces serious interference. The second category is Gaussian background modeling, which uses a continuously refreshed background model to adapt to environmental changes of the monitored scene. Gaussian background modeling is further divided into single Gaussian background modeling and mixed Gaussian background modeling. The single Gaussian method is suitable when the background pixels change little, i.e., there is little noise and the colors are concentrated; the mixed Gaussian method is robust to dynamic changes of the monitored scene and is suitable for target detection in complex scenes. Since the obtaining module 110 has already excluded a significant portion of the noise through the user's selection of the key monitoring area, the model building module 120 preferably employs the single Gaussian background modeling method.
The model building module 120, using the single Gaussian background modeling algorithm, specifically includes:
a background image estimation unit 121, configured to estimate the background image. Specifically, a section of fixed-background video of the key monitoring area is acquired and an estimate is made for the video sequence B0, i.e. the mean luminance μ0(x, y) and the variance σ0²(x, y) of each pixel are obtained. Expressed mathematically:
μ0(x, y) = (1/T) · Σ f_t(x, y),  t = 1, …, T
σ0²(x, y) = (1/T) · Σ [f_t(x, y) − μ0(x, y)]²,  t = 1, …, T
where T is the time length (number of frames) of the video sequence and f_t(x, y) is the video pixel value at (x, y) in frame t. An initial background model is thus established, which completes the establishment and initialization of the background model.
A background image updating unit 122 is configured to update the background image. After the initial background model is obtained, the model is updated according to the video frame input each time so as to adapt to the brightness changes caused by changes of the environment. Expressed mathematically:
μ_t(x, y) = (1 − α) · μ_{t-1}(x, y) + α · f_t(x, y)
σ_t²(x, y) = (1 − α) · σ_{t-1}²(x, y) + α · [f_t(x, y) − μ_t(x, y)]²
where α is the update rate, a number K0 between 0 and 1.
An extraction module 130 is configured to extract a foreground image of the moving target from the image frames of the monitored scene with reference to the background model.
A filtering module 140 is configured to perform morphological filtering on the foreground image of the moving target. The edges of the foreground image of the moving target are usually not smooth and the image contains holes; morphological filtering smooths the edges and eliminates the holes.
A shadow removing module 150 is configured to remove the shadow in the foreground image by adopting a shadow elimination algorithm based on the HSV color model when a shadow is detected in the foreground image. In an actual monitoring scene, illumination is not uniform, for reasons such as weather or occlusion between moving targets, so shadows inevitably appear in the foreground image. At present, there are two general approaches to shadow removal. One is to establish a statistical shadow model from the characteristics of shadows and judge, according to that model, whether each pixel belongs to a shadow area. The other is a shadow elimination algorithm based on the HSV color model, which directly uses image characteristics such as brightness, hue and saturation to judge whether a pixel is shadow. A large number of observation experiments show that when background pixels are covered by shadow, the changes in hue and saturation are very small and the saturation decreases slightly, whereas the influence of a moving target on the background's brightness and chrominance is random and depends on the target's texture and color. Shadow can therefore be distinguished from moving targets by its characteristic brightness and chrominance changes. Because shadows are diverse, a statistical shadow model is difficult to establish perfectly; in practice, engineers often have to establish a specific shadow statistical model for each monitoring scene, which is inconvenient.
The shadow removing module 150, using the shadow elimination algorithm based on the HSV color model, specifically includes:
and a setting unit 151, configured to set f (x, y) as the value of the current motion region pixel, and g (x, y) as the value of the background model pixel.
A converting unit 152, configured to convert the RGB color model into an HSV color model. The algorithm for converting the RGB color model into the HSV color model is as follows:
max = max(R, G, B)
min = min(R, G, B)
if R = max, H = (G - B)/(max - min)
if G = max, H = 2 + (B - R)/(max - min)
if B = max, H = 4 + (R - G)/(max - min)
H = H * 60
if H < 0, H = H + 360
V = max(R, G, B)
S = (max - min)/max
The above conversion is used to obtain the brightness value V(f(x, y)), hue value H(f(x, y)) and saturation value S(f(x, y)) of the current motion area, and the corresponding values V(g(x, y)), H(g(x, y)) and S(g(x, y)) of the background model.
a calculating unit 153, configured to set a threshold U and calculate the brightness difference |V(f(x, y)) - V(g(x, y))|; if |V(f(x, y)) - V(g(x, y))| < U, the point f(x, y) is a shadow point, and the value of the shadow point pixel is removed from the pixels of the current motion region.
After the shadow points are removed, a number of holes appear on the moving target. This is because shadow also falls on the moving target itself, and those parts of the target are eliminated as shadow. The shadow lying on the moving target itself therefore needs to be restored.
A processing unit 154 is configured to restore the shadow remaining on the moving target, i.e. to change the pixels of the shadow on the moving target to white, thereby outputting a complete binarized image of the moving target.
According to the embodiment of the invention, the key monitoring area selected by the user from the monitoring scene is obtained, and the background model is established for the key monitoring area, so that the monitoring range is reduced, the interference of noise such as leaf shaking and the like can be avoided to a great extent, the data processing capacity is reduced, and the cost is reduced.
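Purely as an illustration of how the modules 110 to 150 might be wired together, the following sketch reuses the SingleGaussianBackground class and the helper functions from the earlier sketches; all names and parameters here are assumptions rather than a definitive implementation of the apparatus:

import cv2
import numpy as np

class MovingObjectDetector:
    """Sketch of apparatus 100: acquisition -> model building -> extraction
    -> morphological filtering -> shadow removal, confined to the key area."""

    def __init__(self, roi, init_frames, alpha=0.05):
        self.x, self.y, self.w, self.h = roi                    # obtaining module 110
        crops = [self._crop_gray(f) for f in init_frames]
        self.model = SingleGaussianBackground(crops, alpha)     # model building module 120

    def _crop_gray(self, frame):
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        return gray[self.y:self.y + self.h, self.x:self.x + self.w]

    def detect(self, frame):
        roi_gray = self._crop_gray(frame)
        mask = extract_foreground(self.model, roi_gray)         # extraction module 130
        mask = smooth_foreground(mask)                          # filtering module 140
        roi_bgr = frame[self.y:self.y + self.h, self.x:self.x + self.w]
        roi_hsv = cv2.cvtColor(roi_bgr, cv2.COLOR_BGR2HSV)
        # The background model keeps only luminance, so H and S are set to 0
        # and the model mean is used as the V channel of the background.
        bg_v = np.clip(self.model.mu, 0, 255).astype(np.uint8)
        bg_hsv = np.dstack([np.zeros_like(bg_v), np.zeros_like(bg_v), bg_v])
        mask = remove_shadow(mask, roi_hsv, bg_hsv)             # shadow removing module 150
        self.model.update(roi_gray)
        return mask

Each call to detect() crops the incoming frame to the key monitoring area, so all later processing is confined to that region, which is what keeps the amount of data processing small.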
It will be understood by those skilled in the art that all or part of the processes of the methods of the embodiments described above can be implemented by using a computer program to instruct related hardware to perform the processes, and the processes can be stored in a computer readable storage medium, and when the processes are executed, the processes of the embodiments of the methods described above can be included. The storage medium may be a magnetic disk, an optical disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), or the like.
While the invention has been described with reference to a preferred embodiment, it will be understood by those skilled in the art that various changes in form and detail may be made therein without departing from the spirit and scope of the invention as defined by the appended claims.