CN102982558A - Method and device for detecting moving target - Google Patents

Method and device for detecting moving target

Info

Publication number
CN102982558A
Authority
CN
China
Prior art keywords
image
model
shadow
max
background model
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201210495320XA
Other languages
Chinese (zh)
Inventor
高峰 (Gao Feng)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
WUXI GANGWAN NETWORK TECHNOLOGY Co Ltd
Original Assignee
WUXI GANGWAN NETWORK TECHNOLOGY Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by WUXI GANGWAN NETWORK TECHNOLOGY Co Ltd
Priority to CN201210495320XA
Publication of CN102982558A
Legal status: Pending

Abstract

The invention relates to the technical field of image processing, in particular to a method and device for detecting a moving target. The method comprises the following steps: acquiring a key monitoring area selected by a user from a monitored scene; establishing a background model for the key monitoring area; and extracting a foreground image of the moving target from an image frame of the monitored scene by reference to the background model, thereby achieving the monitoring effect. Because interference factors outside the key monitoring area need not be considered, the algorithm is simple and the data processing load is small. A device implementing the method is accordingly low in cost and fast in operation.

Description

Moving object detection method and device
Technical Field
The invention relates to the technical field of image processing, in particular to a moving target detection method and device.
Background
At present, intelligent video monitoring plays an increasingly important role in the field of public security thanks to its low reliance on human operators, high safety and low missed-alarm and false-alarm rates. The detection, tracking and identification of moving objects in video pictures are key technologies in intelligent video monitoring.
Background subtraction is the most widely used method for detecting moving objects today: a background model is established and continuously updated, and the current frame is differenced against it to obtain the foreground image of the moving object. In a practical monitoring environment, however, the monitored scene is often complex: shaking leaves, lighting of varying brightness, unavoidable noise (such as shadows), and superposition and occlusion of moving objects (such as a dense crowd) all interfere with extraction of the moving-object image. As a result, existing detection methods require complex algorithms and large amounts of data processing, making the detection device expensive and slow.
Disclosure of Invention
The embodiment of the invention provides a moving target detection method with a simple algorithm and a small data processing load; a device adopting the method is low in cost and fast in operation.
The detection method of the moving target comprises the following steps: acquiring a key monitoring area selected by a user from a monitoring scene; establishing a background model for the key monitoring area; and extracting a foreground image of the moving target from the image frame of the monitoring scene by referring to the background model.
Wherein the step of establishing a background model for the key monitoring area comprises: establishing the background model with a single Gaussian background modeling method, which proceeds in two steps:

step A, acquiring a segment of fixed-background video of the key monitoring area and estimating, for the video sequence, the mean \mu_0 and variance \delta_0^2 of each pixel's luminance:

B_0 = [\mu_0, \delta_0^2]

\mu_0(x,y) = \frac{1}{T}\sum_{i=0}^{T-1} f_i(x,y), \qquad \delta_0^2(x,y) = \frac{1}{T}\sum_{i=0}^{T-1}\left[f_i(x,y) - \mu_0(x,y)\right]^2

where T is the time length of the video sequence and f_i(x,y) is the pixel value at (x,y) in frame i; this establishes the initial background model;

step B, updating the model with each input video frame:

B_t = [\mu_t, \delta_t^2]

\mu_t = (1-\alpha)\mu_{t-1} + \partial f_t, \qquad \delta_t^2 = (1-\alpha)\delta_{t-1}^2 + \partial (f_t - \mu_t)^2, \qquad \partial = K_0\,\frac{1}{\sqrt{2\pi}\,\delta_{t-1}} \exp\left\{-\frac{(\mu_{t-1} - f_t)^2}{2\delta_{t-1}^2}\right\}

where \alpha is the update rate and K_0 is a number between 0 and 1.
The step of extracting a foreground image of a moving object from the image frames of the monitored scene with reference to the background model comprises: acquiring image frames of the key monitoring areas from the image frames of the monitoring scene; finding out an area image with different pixels from the background model from the image frame of the key monitoring area; and determining the area image with different pixels as a foreground image of the moving object.
Wherein, with reference to the background model, the step of extracting a foreground image of a moving object from the image frames of the monitored scene comprises: and performing morphological filtering on the foreground image of the moving target.
The step of extracting a foreground image of a moving object from the image frame of the monitored scene with reference to the background model further includes: when a shadow is detected in the foreground image, eliminating the shadow with a shadow elimination algorithm based on the HSV (Hue, Saturation, Value) color model; the HSV shadow elimination algorithm proceeds in the following steps:
step C, setting f(x, y) as the value of a pixel in the current motion area and g(x, y) as the value of the corresponding background model pixel;
step D, converting the RGB color model (red, green, blue) into the HSV color model:

max = max(R, G, B)
min = min(R, G, B)
if R = max, H = (G - B) / (max - min)
if G = max, H = 2 + (B - R) / (max - min)
if B = max, H = 4 + (R - G) / (max - min)
H = H * 60
if H < 0, H = H + 360
V = max(R, G, B)
S = (max - min) / max

and acquiring the brightness value V(f(x, y)), hue value H(f(x, y)) and saturation value S(f(x, y)) of the current motion area, together with the brightness, hue and saturation values V(g(x, y)), H(g(x, y)), S(g(x, y)) of the background model;
step E, setting a threshold U; if |V(f(x, y)) - V(g(x, y))| < U, the point f(x, y) is defined as a shadow point, and the value of the shadow point's pixel is removed from the values of the pixels in the current motion area;
step F, setting the pixels of the shadow on the moving object itself to white.
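The conversion in step D can be transcribed directly into a small Python function; the guard for grey pixels, where max equals min and the hue is undefined, is an added assumption not spelled out in the listing:

```python
def rgb_to_hsv(r, g, b):
    """Convert one RGB pixel to (H, S, V) following the patent's formulas.
    H is in degrees [0, 360), S in [0, 1], V equals max(R, G, B)."""
    mx, mn = max(r, g, b), min(r, g, b)
    if mx == mn:                      # grey pixel: hue undefined, use 0
        h = 0.0
    elif mx == r:
        h = (g - b) / (mx - mn)
    elif mx == g:
        h = 2 + (b - r) / (mx - mn)
    else:
        h = 4 + (r - g) / (mx - mn)
    h *= 60
    if h < 0:
        h += 360
    s = 0.0 if mx == 0 else (mx - mn) / mx
    return h, s, mx
```

For example, pure red maps to H = 0, S = 1, V = 255, and pure green to H = 120.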
Correspondingly, the invention also provides a device for realizing the moving target detection method, which comprises the following steps: the acquisition module is used for acquiring a key monitoring area selected by a user from a monitoring scene; the model establishing module is used for establishing a background model for the key monitoring area; and the extraction module is used for extracting a foreground image of the moving target from the image frame of the monitoring scene by referring to the background model.
The model building module builds a background model for the key monitoring area by adopting a single Gaussian background modeling method. The extraction module comprises: a monitoring area image obtaining unit, configured to obtain an image frame of the key monitoring area from image frames of the monitored scene; the searching unit is used for searching an area image which has different pixels from the background model from the image frame of the key monitoring area; and the determining unit is used for determining the area image with different pixels as a foreground image of the moving target.
The moving object detecting apparatus further includes: and the filtering module is used for performing morphological filtering on the foreground image of the moving target.
Wherein the moving object detecting device further includes: and the shadow removing module is used for removing the shadow in the foreground image by adopting a shadow elimination algorithm based on an HSV color model when the shadow in the foreground image is detected.
The invention has the beneficial effects that: the method for detecting the moving target comprises obtaining a key monitoring area selected by a user from a monitored scene, establishing a background model for the key monitoring area, and extracting a foreground image of the moving target from an image frame of the monitored scene by reference to the background model, thereby achieving the monitoring effect. Interference factors outside the key monitoring area need not be considered, so the algorithm is simple and the data processing load is small. A device implementing the method is accordingly low in cost and fast in operation.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, it is obvious that the drawings in the following description are only some embodiments of the present invention, and for those skilled in the art, other drawings can be obtained according to the drawings without creative efforts.
Fig. 1 is a flowchart illustrating a moving object detection method according to a first embodiment of the present invention.
Fig. 2 is a flowchart illustrating a moving object detecting method according to a second embodiment of the present invention.
Fig. 3 is a schematic structural diagram of a moving object detection apparatus according to a first embodiment of the present invention.
Fig. 4 is a schematic structural diagram of an embodiment of the extraction module of the present invention.
Fig. 5 is a schematic structural diagram of a moving object detecting apparatus according to a second embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Fig. 1 is a flowchart illustrating a moving object detection method according to a first embodiment of the present invention. The method comprises the following steps:
and step S11, acquiring the important monitoring area selected by the user from the monitoring scene.
In general, a moving object always appears in a certain partial area in a monitored scene instead of all monitored areas, and therefore, the partial area can be classified as a key monitored area. Step S11 is to obtain the key monitoring area selected by the user by prompting the user to identify the key monitoring area on the monitoring image of the monitoring scene in a dashed box.
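Restricting processing to the user-selected rectangle is what keeps the later steps cheap. A minimal sketch of cropping an image frame to the key monitoring area; the (x, y, w, h) tuple format is an assumption (it matches what an interactive selector such as OpenCV's selectROI would return):

```python
import numpy as np

def crop_roi(frame, roi):
    """Return the sub-image covering the key monitoring area.
    roi = (x, y, w, h): top-left corner plus width and height."""
    x, y, w, h = roi
    return frame[y:y + h, x:x + w]
```

All subsequent modeling and differencing would then operate on the cropped frame only.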
And step S12, establishing a background model for the important monitoring area.
Background model building methods fall into two broad categories. In the first, a monitoring image containing no moving target (i.e., no foreground) is captured and used as the background model; this is suitable only for indoor monitoring, because the background of a monitored scene in a real environment changes constantly (illumination, wind-blown leaves and the like), causing serious interference. The second is Gaussian background modeling, which continuously refreshes the background model to adapt to environmental change in the monitored scene. Gaussian background modeling is further divided into single Gaussian and mixed Gaussian methods. The single Gaussian method suits backgrounds whose pixels change little, i.e., with little noise and concentrated colors; the mixed Gaussian method is robust to dynamic changes of the monitored scene and suits target detection in complex scenes.
In step S13, a foreground image of the moving object is extracted from the image frame of the monitored scene with reference to the background model.
According to the embodiment of the invention, the key monitoring area selected by the user from the monitoring scene is obtained, so that the monitoring range is greatly reduced; a background model is established for a key monitoring area, but not all monitoring areas, so that interference factors such as wind, rain, snow, leaf shaking and the like are relatively determined. The embodiment of the invention has simple algorithm and small data processing amount.
Fig. 2 is a flowchart illustrating a moving object detecting method according to a second embodiment of the present invention. The method comprises the following steps:
and step S21, acquiring the important monitoring area selected by the user from the monitoring scene.
In general, a moving object always appears in a certain partial area in a monitored scene instead of all monitored areas, and therefore, the partial area can be classified as a key monitored area. Step S21 is to obtain the key monitoring area selected by the user by prompting the user to identify the key monitoring area on the monitoring image of the monitoring scene in a dashed box.
And step S22, establishing a background model for the key monitoring area by adopting a single Gaussian background modeling method.
Background model building methods fall into two categories. In the first, a monitoring image containing no moving target (i.e., no foreground) is captured and used as the background model; this is suitable only for indoor monitoring, because the background of a monitored scene in a real environment changes constantly (illumination, wind-blown leaves and the like), causing serious interference. The second is Gaussian background modeling, which continuously refreshes the background model to adapt to environmental change in the monitored scene. Gaussian background modeling is further divided into single Gaussian and mixed Gaussian methods. The single Gaussian method suits backgrounds whose pixels change little, i.e., with little noise and concentrated colors; the mixed Gaussian method is robust to dynamic changes of the monitored scene and suits target detection in complex scenes. Since step S21 has already excluded a significant portion of the noise through the user's selection of the key monitoring area, step S22 preferably employs the single Gaussian background modeling method.
The single Gaussian background modeling is divided into two steps:
Step A, estimating the background image. Acquire a segment of fixed-background video of the key monitoring area and estimate, for the video sequence, the mean \mu_0 and variance \delta_0^2 of each pixel's luminance. Expressed mathematically:

B_0 = [\mu_0, \delta_0^2]

where:

\mu_0(x,y) = \frac{1}{T}\sum_{i=0}^{T-1} f_i(x,y)

\delta_0^2(x,y) = \frac{1}{T}\sum_{i=0}^{T-1}\left[f_i(x,y) - \mu_0(x,y)\right]^2

T is the time length corresponding to the video sequence and f_i(x,y) is the pixel value at (x,y) in frame i; this establishes the initial background model.
Step B, updating the model with each input video frame. Starting from the initial background model obtained in step A, the model is updated with every incoming frame to adapt to changes in the environment. Expressed mathematically:

B_t = [\mu_t, \delta_t^2]

where:

\mu_t = (1-\alpha)\mu_{t-1} + \partial f_t

\delta_t^2 = (1-\alpha)\delta_{t-1}^2 + \partial (f_t - \mu_t)^2

\partial = K_0\,\frac{1}{\sqrt{2\pi}\,\delta_{t-1}} \exp\left\{-\frac{(\mu_{t-1} - f_t)^2}{2\delta_{t-1}^2}\right\}

where \alpha is the update rate and K_0 is a number between 0 and 1.
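The two modeling steps can be sketched in NumPy. This is a minimal sketch, not the patent's implementation: it assumes grayscale frames as float arrays, the function names are mine, the values alpha = 0.05 and k0 = 0.5 are illustrative, and a small variance floor is added so the Gaussian weight stays finite:

```python
import numpy as np

def init_background(frames):
    """Step A: per-pixel mean and variance over T fixed-background frames."""
    stack = np.stack([f.astype(np.float64) for f in frames])
    mu0 = stack.mean(axis=0)
    var0 = stack.var(axis=0)
    return mu0, var0

def update_background(mu, var, frame, alpha=0.05, k0=0.5):
    """Step B: blend the new frame in, weighted by how well each pixel
    fits the current Gaussian model."""
    f = frame.astype(np.float64)
    v = np.maximum(var, 1e-4)          # variance floor keeps the weight finite
    w = k0 / np.sqrt(2 * np.pi * v) * np.exp(-(mu - f) ** 2 / (2 * v))
    mu_new = (1 - alpha) * mu + w * f
    var_new = (1 - alpha) * var + w * (f - mu_new) ** 2
    return mu_new, var_new
```

A pixel far from the model mean gets a tiny weight w, so transient foreground objects barely corrupt the background.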
In step S23, an image frame of the key monitoring area is obtained from the image frames of the monitored scene.
Since the background model was established only for the key monitoring area in step S22, the image of that area must be obtained from the scene's image frames so the two can be compared.
In step S24, area images whose pixels differ from the background model are found in the image frame of the key monitoring area.
Comparing the key-area image frames with the background model pixel by pixel locates the area images that differ from it.
In step S25, the area image with different pixels is determined as the foreground image of the moving object.
After step S25, the foreground image of the moving object may be filled with white and the remaining, unchanged areas of the key-area image frame filled with black, so that a binarized image is output for processing of subsequent steps.
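The patent says "pixels different from the background model"; one common way to make that concrete under a single-Gaussian model (an assumption here, including the factor k = 2.5) is a k-sigma test on the per-pixel mean and variance:

```python
import numpy as np

def foreground_mask(frame, mu, var, k=2.5):
    """Binarize a frame against the background model: pixels deviating
    from the model mean by more than k standard deviations become
    white (255), everything else black (0)."""
    diff = np.abs(frame.astype(np.float64) - mu)
    return np.where(diff > k * np.sqrt(var), 255, 0).astype(np.uint8)
```

The resulting white-on-black mask is exactly the binarized image described above.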
In step S26, morphological filtering is performed on the foreground image of the moving object. The edges of the foreground image determined in step S25 are usually not smooth, and the image contains holes; morphological filtering smooths the edges and eliminates the holes.
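A self-contained sketch of the morphological filtering step, written in plain NumPy so it runs without an image library (in practice OpenCV's morphologyEx would typically be used instead). Note np.roll wraps at the image borders, a simplification acceptable away from the edges:

```python
import numpy as np

def dilate(mask, r=1):
    """Binary dilation with a (2r+1) x (2r+1) square structuring element."""
    out = np.zeros_like(mask)
    for dy in range(-r, r + 1):
        for dx in range(-r, r + 1):
            out |= np.roll(np.roll(mask, dy, axis=0), dx, axis=1)
    return out

def erode(mask, r=1):
    """Binary erosion: a pixel survives only if its whole neighborhood is set."""
    out = np.ones_like(mask)
    for dy in range(-r, r + 1):
        for dx in range(-r, r + 1):
            out &= np.roll(np.roll(mask, dy, axis=0), dx, axis=1)
    return out

def morph_open(mask, r=1):
    """Opening (erode then dilate) removes isolated speckle noise."""
    return dilate(erode(mask, r), r)

def morph_close(mask, r=1):
    """Closing (dilate then erode) fills small holes in the foreground."""
    return erode(dilate(mask, r), r)
```

Opening followed by closing gives the smoothed, hole-free foreground described in step S26.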
And step S27, when the foreground image is detected to have the shadow, adopting a shadow elimination algorithm based on an HSV color model to remove the shadow in the foreground image.
In an actual monitored scene, non-uniform illumination caused by weather or by occlusion between moving objects inevitably produces shadows in the foreground image.
Currently there are two general approaches to shadow removal. The first builds a shadow statistical model from shadow characteristics and judges from the model whether each pixel belongs to a shadow region. The second is a shadow elimination algorithm based on the HSV color model, which judges whether a pixel is shadow directly from image characteristics such as brightness, hue and saturation. Extensive observation shows that when background pixels are covered by shadow, their chrominance and saturation change very little (the saturation drops slightly), whereas the effect of a moving target on background brightness and chrominance is random, depending on the target's texture and color. Shadow can therefore be distinguished from the moving object by its characteristic brightness and chrominance variation. Because shadows are so varied, a statistical shadow model is difficult to build well: in practice, engineers often have to build a specific shadow model for each monitored scene, which is inconvenient.
The shadow elimination algorithm based on the HSV color model comprises the following steps:
Step one: let f(x, y) be the value of a pixel in the current motion area and g(x, y) be the value of the corresponding background model pixel.
Step two: convert the RGB color model into the HSV color model, as follows:

max = max(R, G, B)
min = min(R, G, B)
if R = max, H = (G - B) / (max - min)
if G = max, H = 2 + (B - R) / (max - min)
if B = max, H = 4 + (R - G) / (max - min)
H = H * 60
if H < 0, H = H + 360
V = max(R, G, B)
S = (max - min) / max

From the above conversion, obtain the brightness value V(f(x, y)), hue value H(f(x, y)) and saturation value S(f(x, y)) of the current motion area, and the brightness, hue and saturation values V(g(x, y)), H(g(x, y)), S(g(x, y)) of the background model.
Step three: set a threshold U and compute the brightness difference |V(f(x, y)) - V(g(x, y))|; if |V(f(x, y)) - V(g(x, y))| < U, the point f(x, y) is a shadow point, and the value of the shadow point's pixel is removed from the values of the pixels in the current motion area.
After the shadow points are removed, a number of holes appear on the moving object. This is because shadow also falls on the object itself and has been eliminated as if it were cast shadow; the moving object must therefore be restored by filling in the shadow that belongs to the object itself.
Step four: restore the holes on the moving object, i.e., set the pixels of the shadow on the moving object to white.
With these four steps the shadow is largely removed and a complete binary image of the moving target is output.
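Steps one to three can be sketched on whole-image arrays; the threshold U = 30 is an illustrative value, and using only the V channel (V = max of R, G, B, per the conversion above) is a simplification of the full HSV test:

```python
import numpy as np

def remove_shadow(frame_rgb, bg_rgb, mask, U=30):
    """Inside the motion mask, a pixel whose brightness V differs from
    the background model's V by less than threshold U is treated as a
    shadow point and dropped from the mask."""
    v_f = frame_rgb.max(axis=2).astype(np.int32)   # V of the current frame
    v_g = bg_rgb.max(axis=2).astype(np.int32)      # V of the background model
    shadow = mask & (np.abs(v_f - v_g) < U)
    return mask & ~shadow
```

Step four, restoring the holes this test punches into the object itself, corresponds to the hole-filling (morphological closing) described for step S26.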
According to the embodiment of the invention, the key monitoring area selected by the user from the monitoring scene is obtained, and the single Gaussian background model is established for the key monitoring area, so that the monitoring range is reduced, the interference of noise such as leaf shaking and the like can be avoided to a great extent, the data processing amount is reduced, and the cost is reduced.
The moving object detection method of the present invention is illustrated in detail in fig. 1 to 2, and the apparatus corresponding to the above method will be further described below.
Fig. 3 is a schematic structural diagram of a moving object detection apparatus according to a first embodiment of the present invention. The moving object detecting apparatus 100 includes: the obtaining module 110, configured to obtain a key monitoring area selected by a user from a monitored scene.
In general, a moving object always appears in a certain partial area of the monitored scene rather than in the whole of it, so that partial area can be designated the key monitoring area. The obtaining module 110 may obtain the key monitoring area selected by the user by prompting the user to mark it with a dashed box on the monitoring image of the scene.
The model establishing module 120 is used for establishing a background model for the key monitoring area. Methods for the model establishing module 120 to build the background model fall into two broad categories. In the first, a monitoring image containing no moving target (i.e., no foreground) is captured and used as the background model; this is suitable only for indoor monitoring, because the background of a monitored scene in a real environment changes constantly (illumination, wind-blown leaves and the like), causing serious interference. The second is Gaussian background modeling, which continuously refreshes the background model to adapt to environmental change in the monitored scene. Gaussian background modeling is further divided into single Gaussian and mixed Gaussian methods. The single Gaussian method suits backgrounds whose pixels change little, i.e., with little noise and concentrated colors; the mixed Gaussian method is robust to dynamic changes of the monitored scene and suits target detection in complex scenes. This embodiment employs the mixed Gaussian background modeling method to handle target detection in complex scenes.
The extracting module 130 is configured to extract a foreground image of the moving object from the image frames of the monitored scene by reference to the background model.
According to the embodiment of the invention, the key monitoring area selected by the user from the monitoring scene is obtained, and the background model is established for the key monitoring area, so that the monitoring range is reduced, the interference of noise such as leaf shaking and the like can be avoided to a great extent, the data processing capacity is reduced, and the cost is reduced.
Fig. 4 is a schematic structural diagram of an embodiment of an extraction module according to the present invention. The extraction module 130 includes:
a monitored-area image obtaining unit 131, configured to obtain an image frame of the key monitoring area from the image frames of the monitored scene. Since the model establishing module 120 builds the background model only for the key monitoring area, the image of that area must be obtained from the scene's image frames so the two can be compared;
a searching unit 132, configured to find, in the image frame of the key monitoring area, area images whose pixels differ from the background model. Comparing the key-area image frames with the background model pixel by pixel locates the area images that differ from it;
a determining unit 133, configured to determine the differing area image as the foreground image of the moving object. After determining the foreground image, the determining unit 133 may fill it with white and fill the remaining, unchanged areas of the key-area image frame with black, thereby outputting a binarized image for processing of subsequent images.
Fig. 5 is a schematic structural diagram of a moving object detecting apparatus according to a second embodiment of the present invention. The moving object detecting apparatus 100 includes:
the obtaining module 110, configured to obtain a key monitoring area selected by a user from a monitored scene. In general, a moving object always appears in a certain partial area of the monitored scene rather than in the whole of it, so that partial area can be designated the key monitoring area. The obtaining module 110 may obtain the key monitoring area selected by the user by prompting the user to mark it with a dashed box on the monitoring image of the scene.
The model building module 120 is configured to build a background model for the key monitoring area using the single Gaussian background modeling method. Background model building methods fall into two categories. In the first, a monitoring image containing no moving target (i.e., no foreground) is captured and used as the background model; this is suitable only for indoor monitoring, because the background of a monitored scene in a real environment changes constantly (illumination, wind-blown leaves and the like), causing serious interference. The second is Gaussian background modeling, which continuously refreshes the background model to adapt to environmental change in the monitored scene. Gaussian background modeling is further divided into single Gaussian and mixed Gaussian methods. The single Gaussian method suits backgrounds whose pixels change little, i.e., with little noise and concentrated colors; the mixed Gaussian method is robust to dynamic changes of the monitored scene and suits target detection in complex scenes. Since the obtaining module 110 has already excluded a significant portion of the noise through the user's selection of the key monitoring area, the model building module 120 preferably uses the single Gaussian background modeling method.
The model building module 120 using the single Gaussian background modeling algorithm specifically includes:
a background image estimation unit 121, configured to estimate the background image. Specifically, a video with a fixed background is acquired and, for the video sequence, the mean \mu_0 and variance \delta_0^2 of each pixel's luminance are estimated. Expressed mathematically:

B_0 = [\mu_0, \delta_0^2]

where:

\mu_0(x,y) = \frac{1}{T}\sum_{i=0}^{T-1} f_i(x,y)

\delta_0^2(x,y) = \frac{1}{T}\sum_{i=0}^{T-1}\left[f_i(x,y) - \mu_0(x,y)\right]^2

T is the time length corresponding to the video sequence and f_i(x,y) is the pixel value at (x,y) in frame i. This completes the establishment and initialization of the background model;
a background image updating unit 122, configured to update the background image. After the initial background model is obtained, the model is updated with each input video frame to adapt to luminance changes caused by the environment. Expressed mathematically:

B_t = [\mu_t, \delta_t^2]

where:

\mu_t = (1-\alpha)\mu_{t-1} + \partial f_t

\delta_t^2 = (1-\alpha)\delta_{t-1}^2 + \partial (f_t - \mu_t)^2

\partial = K_0\,\frac{1}{\sqrt{2\pi}\,\delta_{t-1}} \exp\left\{-\frac{(\mu_{t-1} - f_t)^2}{2\delta_{t-1}^2}\right\}

where \alpha is the update rate and K_0 is a number between 0 and 1.
The extracting module 130 is configured to extract a foreground image of the moving object from the image frames of the monitored scene by reference to the background model.
The filtering module 140 is configured to perform morphological filtering on the foreground image of the moving target. The edges of the foreground image of the moving object are usually not smooth, and the image contains holes; morphological filtering smooths the edges and eliminates the holes.
The shadow removing module 150 is configured to remove shadow from the foreground image, using a shadow elimination algorithm based on the HSV color model, whenever shadow is detected in the foreground image. In an actual monitored scene, non-uniform illumination caused by weather or by occlusion between moving objects inevitably produces shadows in the foreground image. Currently there are two general approaches to shadow removal. The first builds a shadow statistical model from shadow characteristics and judges from the model whether each pixel belongs to a shadow region. The second is a shadow elimination algorithm based on the HSV color model, which judges whether a pixel is shadow directly from image characteristics such as brightness, hue and saturation. Extensive observation shows that when background pixels are covered by shadow, their chrominance and saturation change very little (the saturation drops slightly), whereas the effect of a moving target on background brightness and chrominance is random, depending on the target's texture and color. Shadow can therefore be distinguished from the moving object by its characteristic brightness and chrominance variation. Because shadows are so varied, a statistical shadow model is difficult to build well: in practice, engineers often have to build a specific shadow model for each monitored scene, which is inconvenient.
The shadow removal module 150, using the shadow elimination algorithm based on the HSV color model, specifically includes:
A setting unit 151, configured to set f(x, y) as the value of a pixel in the current motion region and g(x, y) as the value of the corresponding background model pixel.
A converting unit 152, configured to convert the RGB color model into an HSV color model. The algorithm for converting the RGB color model into the HSV color model is as follows:
max = max(R, G, B)
min = min(R, G, B)
if R = max, H = (G − B) / (max − min)
if G = max, H = 2 + (B − R) / (max − min)
if B = max, H = 4 + (R − G) / (max − min)
H = H × 60
if H < 0, H = H + 360
V = max(R, G, B)
S = (max − min) / max
The above conversion yields the luminance value V(f(x, y)), hue value H(f(x, y)) and saturation value S(f(x, y)) of the current motion region, and the corresponding values V(g(x, y)), H(g(x, y)), S(g(x, y)) of the background model.
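The conversion listed above can be sketched directly in code. This is a minimal pure-Python rendering of the listed formulas; the achromatic case max = min, which the listing leaves undefined, is assumed here to map to H = 0, and S is assumed to be 0 when max = 0 (the listing's S formula would otherwise divide by zero).

```python
def rgb_to_hsv(r, g, b):
    """Convert an RGB triple (components in [0, 1]) to (H, S, V),
    with H in degrees [0, 360), following the formulas in the text."""
    mx = max(r, g, b)
    mn = min(r, g, b)
    v = mx
    s = 0.0 if mx == 0 else (mx - mn) / mx
    if mx == mn:
        h = 0.0  # achromatic: hue undefined, 0 by convention (assumption)
    elif mx == r:
        h = (g - b) / (mx - mn)
    elif mx == g:
        h = 2 + (b - r) / (mx - mn)
    else:  # mx == b
        h = 4 + (r - g) / (mx - mn)
    h *= 60
    if h < 0:
        h += 360
    return h, s, v
```

The result agrees with Python's standard `colorsys.rgb_to_hsv` up to the scaling of H (degrees here, fraction of a turn there).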
The calculation unit 153 sets a threshold U and computes the luminance difference |V(f(x, y)) − V(g(x, y))|; if |V(f(x, y)) − V(g(x, y))| < U, the point f(x, y) is a shadow point, and its value is removed from the values of the pixels in the current motion region.
After the shadow points are removed, a number of holes appear on the moving object. This is because shadows also fall on the moving object itself, and those regions are eliminated as shadow as well. Therefore, the shadow on the moving object itself must be restored.
And a processing unit 154, configured to restore the shadow remaining on the moving object to the moving object, that is, to change the pixels of the shadow on the moving object to white, so as to output a complete binary image of the moving object.
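The rule applied by the calculation unit 153 can be sketched as follows. This is a minimal sketch under assumptions: the function name, the 2-D-list representation of the per-pixel luminance planes and the binary foreground mask, and returning a new mask instead of editing in place are all invented for illustration; the luminance-difference test itself is taken literally from the text.

```python
def remove_shadow(fore_v, back_v, mask, U):
    """Clear from a binary foreground mask every pixel whose luminance V
    differs from the background model's luminance by less than threshold U
    (such pixels are classified as shadow).  fore_v, back_v and mask are
    2-D lists of the same shape."""
    out = [row[:] for row in mask]
    for y, row in enumerate(mask):
        for x, m in enumerate(row):
            if m and abs(fore_v[y][x] - back_v[y][x]) < U:
                out[y][x] = 0  # shadow point: remove from the foreground
    return out
```

The processing unit 154 would then set back to white (1) the cleared pixels that lie inside the moving object, e.g. by applying the hole-filling step to the resulting mask.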
According to the embodiment of the invention, the key monitoring area selected by the user from the monitoring scene is obtained and a background model is established for it. This narrows the monitoring range, largely avoids interference from noise such as swaying leaves, reduces the amount of data to be processed, and lowers cost.
It will be understood by those skilled in the art that all or part of the processes of the methods of the embodiments described above can be implemented by using a computer program to instruct related hardware to perform the processes, and the processes can be stored in a computer readable storage medium, and when the processes are executed, the processes of the embodiments of the methods described above can be included. The storage medium may be a magnetic disk, an optical disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), or the like.
While the invention has been described with reference to a preferred embodiment, it will be understood by those skilled in the art that various changes in form and detail may be made therein without departing from the spirit and scope of the invention as defined by the appended claims.

Claims (10)

1. A moving object detection method, comprising:
acquiring a key monitoring area selected by a user from a monitoring scene;
establishing a background model for the key monitoring area;
and extracting a foreground image of the moving target from the image frame of the monitoring scene by referring to the background model.
2. The method of claim 1, wherein the step of establishing a background model for the key monitoring area comprises:
establishing a background model for the key monitoring area by adopting a single Gaussian background modeling method; the single Gaussian background modeling is divided into two steps:
step A, acquiring a section of the collected fixed-background video of the key monitoring area and establishing an estimate B0 for the video sequence, i.e. obtaining the luminance mean μ0 and variance δ0² of each pixel:

μ0(x, y) = (1/T) · Σ_{i=0}^{T−1} f_i(x, y)

δ0²(x, y) = (1/T) · Σ_{i=0}^{T−1} [f_i(x, y) − μ0(x, y)]²

where T is the time length corresponding to the video sequence and f_i(x, y) is the pixel value at (x, y) in frame i, thereby establishing an initial background model;
step B, updating the model according to each input video frame:

μ_t = (1 − ∂) · μ_{t−1} + ∂ · f_t

δ_t² = (1 − α) · δ_{t−1}² + ∂ · (f_t − μ_t)²

∂ = K0 · (1 / (√(2π) · δ_{t−1})) · exp{−(μ_{t−1} − f_t)² / (2 · δ_{t−1}²)}

where α is the update rate and K0 is a constant between 0 and 1.
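The two steps of the single Gaussian background modeling in claim 2 can be sketched per pixel as follows. This is a minimal illustration under assumptions: the exact form of the mean update (which is rendered unreadably in the source) and the parameter defaults are assumptions, and the weight ∂ (here `p`) is clamped to at most 1 so the update stays a convex combination.

```python
import math

def init_background(frames):
    """Step A: per-pixel luminance mean and variance over T initial frames
    (each frame a 2-D list of luminance values)."""
    T = len(frames)
    H, W = len(frames[0]), len(frames[0][0])
    mu = [[sum(f[y][x] for f in frames) / T for x in range(W)]
          for y in range(H)]
    var = [[sum((f[y][x] - mu[y][x]) ** 2 for f in frames) / T
            for x in range(W)] for y in range(H)]
    return mu, var

def update_pixel(mu, var, f, alpha=0.05, k0=0.5):
    """Step B: update one pixel's Gaussian (mu, var) with observation f.
    The weight p follows the claim's expression for the Gaussian density
    scaled by K0; the mean-update form is an assumption."""
    var = max(var, 1e-6)  # avoid division by zero for a degenerate model
    p = k0 * math.exp(-(mu - f) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)
    p = min(p, 1.0)  # keep the weight a valid convex coefficient
    new_mu = (1 - p) * mu + p * f
    new_var = (1 - alpha) * var + p * (f - new_mu) ** 2
    return new_mu, new_var
```

Because p is large only when the observation is close to the current model, the background adapts to gradual illumination changes while foreground pixels barely disturb it.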
3. The method of claim 1, wherein said step of extracting a foreground image of a moving object from an image frame of the monitored scene with reference to the background model comprises:
acquiring image frames of the key monitoring areas from the image frames of the monitoring scene;
finding out an area image with different pixels from the background model from the image frame of the key monitoring area;
and determining the area image with different pixels as a foreground image of the moving object.
4. The method of claim 1, wherein said step of extracting a foreground image of a moving object from an image frame of the monitored scene with reference to the background model is followed by:
and performing morphological filtering on the foreground image of the moving target.
5. The method of claim 1, wherein said step of extracting a foreground image of a moving object from an image frame of the monitored scene with reference to the background model further comprises:
when a shadow is detected in the foreground image, eliminating the shadow in the foreground image by adopting a shadow elimination algorithm based on the HSV (Hue, Saturation, Value) color model; the HSV color model shadow elimination algorithm is divided into the following steps:
step C, setting f(x, y) as the value of a pixel in the current motion area and g(x, y) as the value of the corresponding background model pixel;
step D, converting the RGB (red, green, blue) color model into the HSV color model:
max = max(R, G, B)
min = min(R, G, B)
if R = max, H = (G − B) / (max − min)
if G = max, H = 2 + (B − R) / (max − min)
if B = max, H = 4 + (R − G) / (max − min)
H = H × 60
if H < 0, H = H + 360
V = max(R, G, B)
S = (max − min) / max
acquiring the luminance value V(f(x, y)), hue value H(f(x, y)) and saturation value S(f(x, y)) of the current motion area, and the corresponding values V(g(x, y)), H(g(x, y)), S(g(x, y)) of the background model;
step E, setting a threshold value U; if |V(f(x, y)) − V(g(x, y))| < U, the point f(x, y) is defined as a shadow point, and the value of the shadow point pixel is removed from the values of the pixels in the current motion area;
step F, setting the pixels of the shadow remaining on the moving object to white.
6. A moving object detecting apparatus, comprising:
the acquisition module is used for acquiring a key monitoring area selected by a user from a monitoring scene;
the model establishing module is used for establishing a background model for the key monitoring area;
and the extraction module is used for extracting a foreground image of the moving target from the image frame of the monitoring scene by referring to the background model.
7. The apparatus of claim 6, wherein the model building module builds a background model for the key monitoring area using single Gaussian background modeling.
8. The apparatus of claim 6, wherein the extraction module comprises:
a monitoring area image obtaining unit, configured to obtain an image frame of the key monitoring area from image frames of the monitored scene;
the searching unit is used for searching an area image which has different pixels from the background model from the image frame of the key monitoring area;
and the determining unit is used for determining the area image with different pixels as a foreground image of the moving target.
9. The apparatus of claim 6, further comprising:
and the filtering module is used for performing morphological filtering on the foreground image of the moving target.
10. The apparatus of claim 6, further comprising:
and the shadow removing module is used for removing the shadow in the foreground image by adopting a shadow elimination algorithm based on an HSV color model when the shadow in the foreground image is detected.
CN201210495320XA | 2012-11-28 | 2012-11-28 | Method and device for detecting moving target | Pending | CN102982558A (en)

Priority Applications (1)

Application Number | Priority Date | Filing Date | Title
CN201210495320XA (CN102982558A (en)) | 2012-11-28 | 2012-11-28 | Method and device for detecting moving target


Publications (1)

Publication Number | Publication Date
CN102982558A (en) | 2013-03-20

Family

ID=47856499

Family Applications (1)

Application Number | Title | Priority Date | Filing Date
CN201210495320XA (Pending, published as CN102982558A (en)) | Method and device for detecting moving target | 2012-11-28 | 2012-11-28

Country Status (1)

Country | Link
CN (1) | CN102982558A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
CN109214293A (en)* | 2018-08-07 | 2019-01-15 | University of Electronic Science and Technology of China | A method and system for detecting personnel wearing behavior in an oil field operation region
CN110209063A (en)* | 2019-05-23 | 2019-09-06 | Chengdu Century Photosynthesis Technology Co., Ltd. | A smart device control method and device

Citations (3)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
CN101246547A (en)* | 2008-03-03 | 2008-08-20 | Beihang University | A method for detecting moving objects in video based on scene change features
CN101996410A (en)* | 2010-12-07 | 2011-03-30 | Beijing Jiaotong University | Method and system of detecting moving objects under a dynamic background
CN102073863A (en)* | 2010-11-24 | 2011-05-25 | Institute of Semiconductors, Chinese Academy of Sciences | Method for acquiring the characteristic size of a remote video-monitored target based on depth fingerprint


Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
刘泉志: "Research on Video-based Moving Object Detection and Tracking Algorithms", China Master's Theses Full-text Database *
宋杨: "Research on Moving Object Detection Algorithms Based on the Gaussian Mixture Model", China Master's Theses Full-text Database *
陈瑜: "Research on Moving Object Detection and Tracking Algorithms in an Intelligent Video Surveillance System", China Master's Theses Full-text Database *


Similar Documents

Publication | Title
CN104392468B (en) | Moving Object Detection Method Based on Improved Visual Background Extraction
CN112036254A (en) | Moving vehicle foreground detection method based on video image
US20200250840A1 (en) | Shadow detection method and system for surveillance video image, and shadow removing method
Patel et al. | Flame detection using image processing techniques
CN104599256B (en) | The method and system of removal image rain line based on single image
CN104504856A (en) | Fatigue driving detection method based on Kinect and face recognition
CN103942812B (en) | Moving object detection method based on Gaussian mixture and edge detection
CN106815587B (en) | Image processing method and device
CN113221763B (en) | A flame recognition method based on video image brightness
Sakpal et al. | Adaptive background subtraction in images
CN104933728A (en) | Mixed motion target detection method
CN105046653A (en) | Method and system for removing raindrops in videos
Kar et al. | Moving cast shadow detection and removal from Video based on HSV color space
TW201032180A (en) | Method and device for keeping image background by multiple gauss models
CN104299234B (en) | The method and system that rain field removes in video data
CN105335981B (en) | A kind of cargo monitoring method based on image
CN110717946A (en) | Method for screening flame target from video image
CN112926676B (en) | False target identification method and device and computer equipment
CN102982558A (en) | Method and device for detecting moving target
CN107239761A (en) | Fruit tree branch pulling effect evaluation method based on skeleton Corner Detection
CN104992420A (en) | Video raindrop removing method
CN110580706A (en) | Method and device for extracting video background model
CN103226813B (en) | A processing method for improving video image quality in rainy weather
CN114037725A (en) | Method and equipment for detecting fire
KR101337833B1 (en) | Method for estimating response of audience concerning content

Legal Events

Code | Title
C06 | Publication
PB01 | Publication
C10 | Entry into substantive examination
SE01 | Entry into force of request for substantive examination
C12 | Rejection of a patent application after its publication
RJ01 | Rejection of invention patent application after publication

Application publication date: 2013-03-20

