CN113496500A - Out-of-range detection method and device and storage medium - Google Patents

Out-of-range detection method and device and storage medium

Info

Publication number
CN113496500A
CN113496500A (application CN202010252853.XA; granted publication CN113496500B)
Authority
CN
China
Prior art keywords
detection object
video image
current video
detection
information
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202010252853.XA
Other languages
Chinese (zh)
Other versions
CN113496500B (en)
Inventor
王晟 (Wang Sheng)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hangzhou Ezviz Network Co Ltd
Original Assignee
Hangzhou Ezviz Network Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hangzhou Ezviz Network Co Ltd
Priority to CN202010252853.XA
Publication of CN113496500A
Application granted
Publication of CN113496500B
Status: Active
Anticipated expiration

Abstract

The invention provides an out-of-range detection method, apparatus, and storage medium. The method comprises: acquiring a current video image and characteristic information, including position information, of a detection object in the current video image; finding, according to the characteristic information of the detection object, a frame of video image containing the detection object among the previous N frames of the current video image; determining a motion vector for the detection object from its position in the found frame and its position in the current video image; and determining whether the detection object is out of range from this motion vector and a boundary line vector preset in the current video image. The invention reduces resource consumption and improves the accuracy of out-of-range detection results.

Description

Out-of-range detection method and device and storage medium
Technical Field
The invention relates to the technical field of computer intelligence, and in particular to an out-of-range detection method, apparatus, and storage medium.
Background
The conventional out-of-range detection method works as follows: a tracking algorithm tracks a moving target to obtain its position information, and the out-of-range/intrusion judgment is then made from that position information.
This approach depends on a tracking algorithm, which consumes substantial resources and easily fails when the object is occluded; the method therefore has high resource consumption and a high error rate in its detection results.
Disclosure of Invention
In view of the above, the present invention provides an out-of-range detection method, apparatus, and storage medium that reduce resource consumption and improve the accuracy of out-of-range detection results.
In order to achieve the purpose, the invention provides the following technical scheme:
an out-of-range detection method, comprising:
acquiring a current video image and characteristic information of a detection object in the current video image, wherein the characteristic information comprises position information;
according to the characteristic information of the detection object, finding out a frame of video image containing the detection object from the previous N frames of video images of the current video image; wherein N is a preset value;
determining a motion vector corresponding to the detection object based on the position information of the detection object in the found frame of video image and the position information of the detection object in the current video image;
and determining whether the detected object is out of range according to the motion vector corresponding to the detected object and a boundary line vector preset in the current video image.
An out-of-range detection apparatus comprising a non-transitory computer readable storage medium, and a processor connected to the non-transitory computer readable storage medium by a bus;
the non-transitory computer readable storage medium storing a computer program executable on the processor, the processor implementing the following steps when executing the program:
acquiring a current video image and characteristic information of a detection object in the current video image, wherein the characteristic information comprises position information;
according to the characteristic information of the detection object, finding out a frame of video image containing the detection object from the previous N frames of video images of the current video image; wherein N is a preset value;
determining a motion vector corresponding to the detection object based on the position information of the detection object in the found frame of video image and the position information of the detection object in the current video image;
and determining whether the detected object is out of range according to the motion vector corresponding to the detected object and a boundary line vector preset in the current video image.
A non-transitory computer readable storage medium storing instructions that, when executed by a processor, cause the processor to perform the steps in the out-of-bounds detection method described above.
According to the technical scheme above, after the current video image and the characteristic information of a detection object in it are acquired, one frame of video image containing the detection object is found among the previous N frames of the current video image; the motion vector of the detection object is determined from its position in that frame and its position in the current video image; and whether the detection object is out of range is judged from this motion vector and the preconfigured boundary line vector. The invention needs no tracking algorithm to determine whether the detection object is out of range, so resource consumption is low; it only needs to find the detection object in the previous N frames, which effectively reduces tracking failures caused by the detection object being occluded in some frames and thus improves the accuracy of the out-of-range detection result.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings needed to be used in the description of the embodiments are briefly introduced below, and it is obvious that the drawings in the following description are only some embodiments of the present application, and it is obvious for those skilled in the art to obtain other drawings based on these drawings without inventive labor.
FIG. 1 is a flowchart of a first out-of-range detection method according to an embodiment of the present invention;
FIG. 2 is a flowchart of a second out-of-range detection method according to an embodiment of the present invention;
FIG. 3 is a flowchart of a third out-of-range detection method according to an embodiment of the present invention;
FIG. 4 is a flowchart of a fourth out-of-range detection method according to an embodiment of the present invention;
FIG. 5 is a schematic structural diagram of an out-of-range detection apparatus according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
In the embodiment of the invention, out-of-range detection is performed on detection objects in a video image on the basis of an existing recognition result of that image (some or all of the characteristic information of the detection objects has already been identified). The main process is: find a video image containing the detection object among the first N frames preceding the current video image, then make the out-of-range judgment from the characteristic information of the detection object in the current video image and in the found video image.
The embodiments of the present invention are described in detail below with reference to specific examples:
the first embodiment,
Referring to fig. 1, fig. 1 is a flowchart of a first out-of-range detection method according to an embodiment of the present invention. As shown in fig. 1, the method mainly includes the following steps:
step 101, obtaining a current video image and feature information of a detection object in the current video image.
In the embodiment of the present invention, the feature information includes position information.
Step 102, finding out a frame of video image containing the detection object from the previous N frames of video images of the current video image according to the characteristic information of the detection object.
Here, N is a preset value, for example, 5.
Step 103, determining a motion vector corresponding to the detection object based on the position information of the detection object in the found frame of video image and the position information of the detection object in the current video image.
Step 104, determining whether the detection object is out of range according to the motion vector corresponding to the detection object and a boundary line vector preset in the current video image.
In general, a camera's position is fixed and so is its shooting range. If entry to or exit from a certain area within that range is prohibited, each edge of the area corresponds to a boundary line in the video image; once a direction is assigned to that line, it becomes the boundary line vector preset in the current video image in the embodiment of the present invention. In practice, every frame of video shot by the camera carries the same preset boundary line vector.
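For illustration, a forbidden region given as an ordered list of vertices yields one directed boundary line vector per edge. This is a minimal sketch, assuming vertices are listed counter-clockwise so that each edge's "inside" lies consistently on its left; the function name is illustrative, not from the patent:

```python
def boundary_vectors(region):
    """Turn an ordered list of region vertices into directed boundary line
    vectors, one (start, end) pair per edge of the region."""
    n = len(region)
    return [(region[i], region[(i + 1) % n]) for i in range(n)]
```

Each returned pair plays the role of the preset boundary line vector against which a detection object's motion vector is later tested.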
As can be seen from the embodiment shown in fig. 1, for each detection object in the current video image, one frame containing the detection object is found among the previous N frames; the motion vector of the detection object is determined from its position in the found frame and its position in the current video image; and the out-of-range judgment is then made from this motion vector and the preset boundary line vector.
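The overall flow of steps 101 to 104 can be sketched as follows. All names (`Detection`, `detect_out_of_range`, `same_object`) are illustrative assumptions; the matching predicate is left abstract, and the comparison against the boundary line vector's preset crossing direction is omitted for brevity (only the intersection test of step 104 is shown):

```python
from dataclasses import dataclass
from typing import List, Optional, Tuple

Point = Tuple[float, float]

@dataclass
class Detection:
    position: Point   # position information (e.g. box centre)
    feature: tuple    # remaining characteristic information

def cross(o: Point, a: Point, b: Point) -> float:
    """2D cross product (a-o) x (b-o); its sign says which side of line oa the point b lies on."""
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def segments_intersect(p1: Point, p2: Point, p3: Point, p4: Point) -> bool:
    """Proper intersection of segments p1p2 and p3p4 via opposite orientation signs."""
    d1 = cross(p3, p4, p1)
    d2 = cross(p3, p4, p2)
    d3 = cross(p1, p2, p3)
    d4 = cross(p1, p2, p4)
    return d1 * d2 < 0 and d3 * d4 < 0

def detect_out_of_range(prev_frames: List[List[Detection]],
                        obj: Detection,
                        boundary: Tuple[Point, Point],
                        same_object) -> Optional[bool]:
    # Step 102: scan the previous N frames (most recent first) for the object.
    for frame in prev_frames:
        for cand in frame:
            if same_object(cand, obj):
                # Step 103: motion vector runs from the earlier position to the current one.
                start, end = cand.position, obj.position
                # Step 104: out of range if the motion vector crosses the boundary line vector.
                return segments_intersect(start, end, boundary[0], boundary[1])
    return None  # object not found in the previous N frames
```

A detection object seen at (0, 0) in an earlier frame and at (2, 2) now, with a boundary line from (0, 2) to (2, 0), would be flagged as crossing.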
Embodiment II,
Referring to fig. 2, fig. 2 is a flowchart of a second out-of-range detection method according to an embodiment of the present invention, as shown in fig. 2, the method mainly includes the following steps:
step 2011, receiving the current video image and the position information and the size information of the detection object in the current video image.
The position information and size information of the detection object are spatial features and belong to its characteristic information, which also includes the image feature of the detection object.
Step 2012, extracting image features of the detection object from the current video image based on the position information and the size information of the detection object in the current video image.
Steps 2011 to 2012 are a specific implementation of step 101 in the embodiment shown in fig. 1. The position information, size information, and image feature of the detection object all belong to its characteristic information.
In the embodiment of the present invention, the image feature of the detection object may be a gray histogram feature, an RGB histogram feature, or any feature capable of reflecting image information of the detection object.
In an embodiment of the present invention, the position information of the detection object may include its position coordinates, and the size information may include a width and a height; the frame corresponding to the detection object in the current video image can be determined from these. For example, when the position coordinates are the center point of the frame, the four-corner coordinates of the frame can be computed from the size information. Once the frame is determined, the image information inside it can be extracted to obtain the image feature of the detection object.
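As a concrete illustration of the above, a frame can be derived from a centre position plus width/height, and a grayscale histogram (one of the image features the text mentions) extracted from the pixels it covers. A minimal sketch; function names and the list-of-lists image representation are assumptions for illustration:

```python
def box_from_center(cx, cy, w, h):
    """Four-corner coordinates of the frame implied by centre (cx, cy) and size (w, h)."""
    left, right = cx - w / 2.0, cx + w / 2.0
    bottom, top = cy - h / 2.0, cy + h / 2.0
    return (left, bottom), (left, top), (right, bottom), (right, top)

def gray_histogram(image, box, bins=16):
    """Grayscale histogram of the pixels inside the frame.
    `image` is a row-major list of lists of 0-255 gray values."""
    (left, bottom), (right, top) = box[0], box[3]
    hist = [0] * bins
    for y in range(max(0, int(bottom)), min(len(image), int(top))):
        row = image[y]
        for x in range(max(0, int(left)), min(len(row), int(right))):
            hist[row[x] * bins // 256] += 1   # bucket the gray value
    return hist
```

An RGB histogram would follow the same pattern with one histogram per channel.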
Step 202, according to the feature information of the detection object, finding out a frame of video image containing the detection object from the previous N frames of video images of the current video image.
Here, N is a preset value, for example, 5.
In practice, the characteristic information of the same object in two frames shot close together in time should be identical or very similar. Based on this, the embodiment of the present invention finds a frame of video image containing the detection object among the previous N frames of the current video image as follows:
Traverse the first N frames of the current video image in order of shooting time from latest to earliest; in each frame, search for a detection object y whose characteristic information matches that of the detection object. As soon as such a y is found, the frame containing it is determined to be the frame of video image containing the detection object; if all N frames are searched without finding y, the search ends.
Here, it should be noted that the search for the detection object y may also be performed according to other sorting orders of the first N frames of video images of the current video image, and the above specific implementation method is only one preferred embodiment.
The following exemplifies the specific implementation method:
assuming that N is 3, a process of finding out a frame of video image including the detection object from the previous N frames of video images of the current video image according to the feature information of the detection object is as follows:
S1, search the first-sorted video image for a detection object y whose characteristic information matches that of the detection object; if found, determine the first-sorted video image to be the frame containing the detection object and end the search; otherwise go to step S2;
S2, search the second-sorted video image likewise; if found, determine the second-sorted video image to be the frame containing the detection object and end the search; otherwise go to step S3;
S3, search the third-sorted video image likewise; if found, determine the third-sorted video image to be the frame containing the detection object and end the search; otherwise, since all of the first 3 frames of the current video image have now been searched without finding the detection object y, the search also ends.
In an embodiment of the present invention, the searching for the detection object y whose feature information matches with the feature information of the detection object in each frame of the video image may specifically include:
assuming that the detection object is a detection object x;
matching the characteristic information of each detection object to be matched in that frame against the detection object x; if a detection object to be matched and the detection object x meet the preset characteristic-information matching condition, that detection object to be matched is determined to be the detection object y;
the preset matching condition between a detection object to be matched and the detection object x is: the difference between their image features is smaller than a preset feature-difference threshold, the difference between their areas is smaller than a preset area-difference threshold, and the included angle between the motion vector of the detection object to be matched and the check vector of the detection object x is smaller than a preset included-angle threshold (that is, the motion trajectory stays within a reasonable range of variation).
Here, the area of each object is determined from its size information. The motion vector of the detection object to be matched is determined from its position in a frame containing it, found among the first N frames of its own video image, and its position in its own video image: the directed line from the earlier position to the later one. The check vector of the detection object x corresponding to the detection object to be matched is likewise a directed line from the earlier position to the later one: if the position of the detection object to be matched is p1 and the position of the detection object x is p2, the check vector is the directed line from p1 to p2.
It should be noted that while matching characteristic information, it is not yet known whether a given detection object to be matched is the sought detection object y, so the motion vector of the detection object x cannot yet be determined. The concept of a check vector is therefore introduced: it is the motion vector the detection object x would have under the assumption that the detection object to be matched is the sought detection object y. If that assumption holds, the check vector is exactly the motion vector of x. Accordingly, in the embodiment of the present invention, the check vector of x corresponding to a detection object to be matched is determined from the position of the detection object to be matched in its video image (the vector's start point) and the position of x in the current video image (the vector's end point).
In other words, matching each detection object to be matched in that frame against the detection object x is a judgment on three aspects derived from their characteristic information: image features, area, and motion trajectory. This is a preferred scheme.
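The three-part matching condition described above might be sketched as follows. All field names (`feature`, `area`, `position`, `motion`) and threshold values are illustrative assumptions, not values fixed by the patent:

```python
import math
from dataclasses import dataclass, field

@dataclass
class Obs:
    feature: list              # e.g. a grayscale histogram
    area: float                # derived from size information
    position: tuple            # position coordinates in its frame
    motion: tuple = (0.0, 0.0) # motion vector (known for candidates in earlier frames)

def angle_between(u, v):
    """Angle in radians between 2D vectors u and v."""
    nu = math.hypot(u[0], u[1])
    nv = math.hypot(v[0], v[1])
    if nu == 0.0 or nv == 0.0:
        return 0.0  # degenerate case: treat a zero-length vector as aligned
    c = (u[0] * v[0] + u[1] * v[1]) / (nu * nv)
    return math.acos(max(-1.0, min(1.0, c)))

def matches(candidate: Obs, x: Obs,
            feat_thresh=0.25, area_thresh=0.30, angle_thresh=math.pi / 4):
    """Preset characteristic-information matching condition (thresholds illustrative)."""
    # 1. image-feature difference below the feature-difference threshold
    feat_diff = sum(abs(a - b) for a, b in zip(candidate.feature, x.feature)) / (
        sum(candidate.feature) + 1e-9)
    # 2. area difference below the area-difference threshold
    area_diff = abs(candidate.area - x.area) / max(candidate.area, x.area)
    # 3. included angle between the candidate's motion vector and x's check vector
    #    (the directed line from the candidate's position to x's position)
    check = (x.position[0] - candidate.position[0],
             x.position[1] - candidate.position[1])
    angle = angle_between(candidate.motion, check)
    return feat_diff < feat_thresh and area_diff < area_thresh and angle < angle_thresh
```

The angle test encodes the "reasonable variation of motion trajectory" requirement: a candidate whose check vector points opposite to its established motion is rejected.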
Step 203, determining a motion vector corresponding to the detection object based on the position information of the detection object in the found frame of video image and the position information of the detection object in the current video image.
In this embodiment of the present invention, determining the motion vector corresponding to the detection object from its position in the found frame and its position in the current video image specifically comprises: taking the position coordinates of the detection object in the found frame as the start point of the motion vector, and its position coordinates in the current video image as the end point; the directed line from the start point to the end point is the motion vector corresponding to the detection object.
Step 204, determining whether the detection object is out of range according to the motion vector corresponding to the detection object and a boundary line vector preset in the current video image.
In general, a camera's position is fixed and so is its shooting range. If entry to or exit from a certain area within that range is prohibited, each edge of the area corresponds to a boundary line in the video image; once a direction is assigned to that line, it becomes the boundary line vector preset in the current video image in the embodiment of the present invention. In practice, every frame of video shot by the camera carries the same preset boundary line vector.
As can be seen from the embodiment shown in fig. 2, the invention separates out-of-range judgment from video-image recognition and performs the judgment directly on an existing recognition result, which reduces the resource consumption of image recognition. In addition, no tracking algorithm is used in the out-of-range detection of the detection object, so resource consumption is low: the motion vector of the detection object in the (N+1)-th frame is determined from the previous N frames, and the out-of-range judgment is made from this motion vector. This reduces tracking failures caused by the detection object being occluded in some frames and improves the accuracy of the out-of-range detection result.
Embodiment III,
Referring to fig. 3, fig. 3 is a flowchart of a third out-of-bounds detection method according to an embodiment of the present invention, and as shown in fig. 3, the method mainly includes the following steps:
step 3011, receiving the current video image and the position information and the size information of the detection object in the current video image.
Step 3012, according to the position information and size information of each detection object in the current video image, performing fusion processing on each detection object in the current video image to obtain a new detection object, replacing the original detection object in the current video image with the new detection object, and determining the position information and size information of the new detection object.
Step 3013, extract the image feature of the detection object from the current video image based on the position information and size information of the detection object in the current video image.
The detection object in step 3013 is the new detection object obtained after the fusion processing.
Steps 3011 to 3013 are a specific implementation of step 101 in the embodiment shown in fig. 1. The position information, size information, and image feature of the detection object all belong to its characteristic information.
In this embodiment, the detection object after the fusion process is subjected to the boundary crossing detection. The image feature of the detection object may be a grayscale histogram feature, an RGB histogram feature, or any feature capable of reflecting image information of the detection object.
In an embodiment of the present invention, the position information of the detection object may include its position coordinates, and the size information may include a width and a height; the frame corresponding to the detection object in the current video image can be determined from these. For example, when the position coordinates are the center point of the frame, the four-corner coordinates of the frame can be computed from the size information. Once the frame is determined, the image information inside it can be extracted to obtain the image feature of the detection object.
In an embodiment of the present invention, the size information includes a width and a height;
according to the position information and the size information of each initial detection object in the current video image, carrying out fusion processing on the initial detection objects to obtain a new detection object, and specifically comprising the following steps:
determining a frame body, corresponding to each initial detection object, in the current video image according to the position information and the size information of each initial detection object in the current video image;
calculating the overlapping degree of the frame bodies corresponding to any two detection objects in the current video image;
and fusing a plurality of frames with the overlapping degree exceeding a preset overlapping degree threshold into one frame, and taking the image content covered by the fused frame as a new detection object.
In practice, the frames may be fused as follows: compare the four-corner coordinates of the frames, and take the minimum abscissa and minimum ordinate as the lower-left corner of the fused frame, the minimum abscissa and maximum ordinate as the upper-left corner, the maximum abscissa and minimum ordinate as the lower-right corner, and the maximum abscissa and maximum ordinate as the upper-right corner.
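The fusion rule above is a coordinate-wise min/max, i.e. the union bounding box of the frames. A sketch follows; the text does not fix a formula for the "overlapping degree", so intersection-over-union is assumed here as one common choice, and the `(left, bottom, right, top)` box representation is illustrative:

```python
def overlap_ratio(a, b):
    """Intersection-over-union of two frames given as (left, bottom, right, top)."""
    il, ib = max(a[0], b[0]), max(a[1], b[1])
    ir, it = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ir - il) * max(0.0, it - ib)
    if inter == 0.0:
        return 0.0
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def fuse(boxes):
    """Fuse frames as the text describes: span the minimum abscissa/ordinate
    to the maximum abscissa/ordinate over all frames being fused."""
    return (min(b[0] for b in boxes), min(b[1] for b in boxes),
            max(b[2] for b in boxes), max(b[3] for b in boxes))
```

Frames whose `overlap_ratio` exceeds the preset overlap threshold would be passed together to `fuse`, and the image content inside the fused frame becomes the new detection object.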
Step 302, according to the feature information of the detection object, finding out a frame of video image containing the detection object from the previous N frames of video images of the current video image.
Step 303, determining a motion vector corresponding to the detection object based on the position information of the detection object in the found frame of video image and the position information of the detection object in the current video image.
And step 304, determining whether the detected object is out of range according to the motion vector corresponding to the detected object and a boundary line vector preset in the current video image.
The above steps 302 to 304 follow the same principles as steps 202 to 204 shown in fig. 2 and are not described again.
As can be seen from the embodiment shown in fig. 3, performing the out-of-range judgment on an existing recognition result of the video image reduces the resource consumption of image recognition. Moreover, fusing detection objects that may belong to different parts of the same object into a single detection object before out-of-range detection reduces the number of detection objects and hence the resource consumption further. In addition, no tracking algorithm is used: the motion vector of the detection object in the (N+1)-th frame is determined from the previous N frames and the out-of-range judgment is made from this motion vector, which reduces tracking failures caused by the detection object being occluded in some frames and improves the accuracy of the out-of-range detection result.
Embodiment IV,
Referring to fig. 4, fig. 4 is a flowchart of a fourth out-of-range detection method according to an embodiment of the present invention. As shown in fig. 4, the method mainly includes the following steps:
step 401, obtaining a current video image and feature information of a detection object in the current video image.
In the embodiment of the present invention, the feature information includes position information.
Step 401 can be implemented by steps 2011 to 2013 shown in fig. 2, or by steps 3011 to 3013 shown in fig. 3.
Step 402, according to the feature information of the detection object, finding out a frame of video image containing the detection object from the previous N frames of video images of the current video image.
Here, N is a preset value, for example, 5.
This step 402 follows the same implementation principle as step 202 shown in fig. 2.
Step 403, determining a motion vector corresponding to the detection object based on the position information of the detection object in the found frame of video image and the position information of the detection object in the current video image.
This step 403 follows the same implementation principle as step 203 shown in fig. 2.
Step 404, judging whether the motion vector corresponding to the detection object intersects with the boundary line vector; if so, further calculating the modulus of the cross product of the motion vector corresponding to the detection object and the boundary line vector, and determining whether the detection object crosses the boundary according to the modulus of the cross product of the two vectors and a crossing direction preset for the boundary line vector.
This step 404 is a detailed refinement of step 104 shown in fig. 1, step 204 shown in fig. 2, and step 304 shown in fig. 3.
In the embodiment of the present invention, the method for determining whether the motion vector corresponding to the detection object intersects with the boundary line vector may specifically be as follows:
assume that the coordinates of the starting point A and the ending point B of the motion vector corresponding to the detection object are (Ax, Ay) and (Bx, By), respectively, and the coordinates of the starting point C and the ending point D of the boundary line vector are (Cx, Cy) and (Dx, Dy); if the motion vector corresponding to the detection object and the boundary line vector meet the preset intersection conditions, it is determined that the two vectors intersect, otherwise it is determined that they do not intersect;
wherein the preset intersection conditions that the motion vector corresponding to the detection object and the boundary line vector need to meet are:
Condition one: the maximum of Ax and Bx is not less than the minimum of Cx and Dx;
Condition two: the maximum of Ay and By is not less than the minimum of Cy and Dy;
Condition three: the maximum of Cx and Dx is not less than the minimum of Ax and Bx;
Condition four: the maximum of Cy and Dy is not less than the minimum of Ay and By;
Condition five: the product of the modulus of the cross product of vector AC and vector AB and the modulus of the cross product of vector AB and vector AD is greater than 0;
Condition six: the product of the modulus of the cross product of vector CA and vector CD and the modulus of the cross product of vector CD and vector CB is greater than 0.
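The six conditions above amount to a standard segment intersection test: conditions one to four reject segments whose axis-aligned bounding boxes do not overlap, and conditions five and six check that each segment's endpoints strictly straddle the other segment. A minimal sketch in Python (points as (x, y) tuples; the "modulus of the cross product" is read here as the signed z-component of the 2D cross product, since the sign test in conditions five and six requires a signed value):

```python
def cross(o, a, b):
    """Signed z-component of the 2D cross product of vectors OA and OB."""
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def segments_intersect(A, B, C, D):
    """Return True if segment AB intersects segment CD (conditions one to six)."""
    # Conditions one to four: the axis-aligned bounding boxes must overlap.
    if max(A[0], B[0]) < min(C[0], D[0]):  # condition one
        return False
    if max(A[1], B[1]) < min(C[1], D[1]):  # condition two
        return False
    if max(C[0], D[0]) < min(A[0], B[0]):  # condition three
        return False
    if max(C[1], D[1]) < min(A[1], B[1]):  # condition four
        return False
    # Condition five: C and D lie strictly on opposite sides of line AB.
    if cross(A, C, B) * cross(A, B, D) <= 0:
        return False
    # Condition six: A and B lie strictly on opposite sides of line CD.
    if cross(C, A, D) * cross(C, D, B) <= 0:
        return False
    return True
```

Note that with the strict "greater than 0" tests, merely touching or collinear segments are treated as non-intersecting, which matches the conditions as stated.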
In the embodiment of the present invention, the two sides divided by the boundary line vector are a first side and a second side, respectively, where the first side is the left side or the lower side of the boundary line and the second side is the right side or the upper side of the boundary line. The method for determining whether the detection object crosses the boundary according to the modulus of the cross product of the two vectors and the crossing direction preset for the boundary line vector may specifically be as follows:
if the border crossing direction is from the first side to the second side, determining that the detection object is border crossing if the modulus of the cross product of the two vectors is a positive value, and determining that the detection object is not border crossing if the modulus of the cross product of the two vectors is a negative value;
if the cross-border direction is from the second side to the first side, determining that the detection object is not cross-border if the modulus of the cross product of the two vectors is a positive value, and determining that the detection object is cross-border if the modulus of the cross product of the two vectors is a negative value;
if the preset crossing direction covers both directions, that is, both from the first side to the second side and from the second side to the first side, the detection object is determined to have crossed the boundary as long as the modulus of the cross product of the two vectors is not 0.
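The direction rule above can be sketched as follows, assuming the two segments are already known to intersect. The function name, the direction labels, and the choice of taking the cross product as motion vector x boundary line vector are illustrative assumptions; in image coordinates (y pointing down) the sign convention may need to be flipped:

```python
def cross2(u, v):
    """Signed z-component of the 2D cross product u x v."""
    return u[0] * v[1] - u[1] * v[0]

def is_out_of_range(motion_start, motion_end, line_start, line_end, direction):
    """Judge a boundary crossing after the segments are known to intersect.

    direction: 'first_to_second', 'second_to_first', or 'both'.
    """
    motion = (motion_end[0] - motion_start[0], motion_end[1] - motion_start[1])
    boundary = (line_end[0] - line_start[0], line_end[1] - line_start[1])
    c = cross2(motion, boundary)
    if direction == 'first_to_second':
        return c > 0   # positive sign: crossing from first side to second side
    if direction == 'second_to_first':
        return c < 0   # negative sign: crossing from second side to first side
    # 'both': any actual crossing (non-zero cross product) counts.
    return c != 0
```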
As can be seen from the embodiment of the present invention shown in fig. 4, the present invention finds one frame of video image containing the detection object from the previous N frames of video images, determines the motion vector of the detection object according to the position information of the detection object in the current video image and in the found frame of video image, and then performs the boundary crossing judgment on the detection object according to the motion vector of the detection object and a preset boundary line vector. No tracking algorithm is used in this process, so resource consumption is low. Moreover, since the motion vector of the detection object in the (N+1)-th frame of video image is determined from the previous N frames of video images and boundary crossing is judged from that motion vector, tracking failures caused by the detection object being occluded in some video images are reduced, and the accuracy of the boundary crossing detection result is improved.
The above four embodiments describe the boundary crossing detection method provided by the present invention in detail. The present invention also provides a boundary crossing detection apparatus, which is described in detail below with reference to fig. 5.
Referring to fig. 5, fig. 5 is a schematic structural diagram of an out-of-range detection apparatus provided in an embodiment of the present invention. As shown in fig. 5, the apparatus includes a non-transitory computer-readable storage medium 501, and a processor 502 connected to the non-transitory computer-readable storage medium 501 through a bus;
the non-transitory computer-readable storage medium 501 is used for storing a computer program executable on the processor 502, and the processor 502 implements the following steps when executing the program:
acquiring a current video image and characteristic information of a detection object in the current video image, wherein the characteristic information comprises position information;
according to the characteristic information of the detection object, finding out a frame of video image containing the detection object from the previous N frames of video images of the current video image; wherein N is a preset value;
determining a motion vector corresponding to the detection object based on the position information of the detection object in the found frame of video image and the position information of the detection object in the current video image;
and determining whether the detected object is out of range according to the motion vector corresponding to the detected object and a boundary line vector preset in the current video image.
In the embodiment of the present invention, the feature information further includes size information and image features;
In an optional embodiment of the present invention, the acquiring, by the processor 502, feature information of the current video image and the detection object in the current video image includes:
receiving a current video image and position information and size information of a detection object in the current video image;
image features of the detection object are extracted from the current video image based on the position information and the size information of the detection object in the current video image.
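The patent does not specify a particular feature extractor, so as a minimal illustration, the image region from which features would be computed can be cropped using the detection object's position and size (taking the position as the top-left corner, an assumed convention):

```python
def extract_patch(image, x, y, w, h):
    """Crop the region of a detection object from an image.

    image: nested list (or array) indexed as image[row][col];
    (x, y): assumed top-left corner; (w, h): width and height.
    """
    return [row[x:x + w] for row in image[y:y + h]]
```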
In another optional embodiment of the present invention, the acquiring, by the processor 502, feature information of the current video image and the detection object in the current video image includes:
receiving a current video image and position information and size information of an initial detection object in the current video image;
according to the position information and the size information of each initial detection object in the current video image, carrying out fusion processing on the initial detection objects to obtain new detection objects, and determining the position information and the size information of the new detection objects;
and taking the new detection object as a detection object in the current video image, and extracting the image characteristics of the detection object from the current video image based on the position information and the size information of the detection object in the current video image.
In the apparatus of fig. 5, the size information includes a width and a height;
the processor 502, according to the position information and the size information of each initial detection object in the current video image, performing fusion processing on the initial detection objects to obtain a new detection object, includes:
determining a frame corresponding to each initial detection object in the current video image according to the position information and the size information of each initial detection object in the current video image;
calculating the overlapping degree of the frames corresponding to any two initial detection objects in the current video image;
and fusing a plurality of frames whose overlapping degree exceeds a preset overlapping degree threshold into one frame, and taking the image content covered by the fused frame as a new detection object.
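A sketch of the fusion step, assuming boxes are given as (x, y, w, h) with (x, y) the top-left corner, and taking intersection-over-union as the overlapping degree (the patent only speaks of an "overlapping degree", so IoU is one reasonable choice):

```python
def overlap_degree(a, b):
    """Intersection-over-union of two boxes given as (x, y, w, h)."""
    ax, ay, aw, ah = a
    bx, by, bw, bh = b
    ix = max(0, min(ax + aw, bx + bw) - max(ax, bx))  # intersection width
    iy = max(0, min(ay + ah, by + bh) - max(ay, by))  # intersection height
    inter = ix * iy
    union = aw * ah + bw * bh - inter
    return inter / union if union > 0 else 0.0

def fuse_boxes(a, b):
    """Merge two boxes into the smallest box covering both."""
    x = min(a[0], b[0])
    y = min(a[1], b[1])
    w = max(a[0] + a[2], b[0] + b[2]) - x
    h = max(a[1] + a[3], b[1] + b[3]) - y
    return (x, y, w, h)
```

Boxes whose `overlap_degree` exceeds the preset threshold would be repeatedly merged with `fuse_boxes`, and the merged box becomes the new detection object.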
In the apparatus shown in fig. 5,
the processor 502, according to the feature information of the detection object, finding out a frame of video image containing the detection object from the previous N frames of video images of the current video image, includes:
searching, in the order of shooting time from back to front, each of the previous N frames of video images of the current video image for a detection object y whose feature information matches the feature information of the detection object; when the detection object y is found, determining the video image in which it is found as the frame of video image containing the detection object; if all N frames of video images have been searched without finding the detection object y, ending the search.
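The back-to-front search over the previous N frames can be sketched as follows; `matches` stands for the feature information matching condition, and the data layout (a list of frames ordered oldest to newest, each a list of detection objects) is an assumption:

```python
def find_matching_frame(previous_frames, target, matches):
    """Search the previous N frames from most recent to oldest for a
    detection object whose features match `target`.

    Returns (frame, matched_object), or (None, None) if no frame matches.
    """
    for frame in reversed(previous_frames):  # back to front in shooting time
        for candidate in frame:
            if matches(candidate, target):
                return frame, candidate
    return None, None
```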
In the apparatus shown in fig. 5,
the processor 502, searching each frame of video image for a detection object y whose feature information matches the feature information of the detection object, includes:
assuming that the detection object is a detection object x;
matching the feature information of each detection object to be matched in the frame of video image with that of the detection object x, and if a detection object to be matched and the detection object x meet a preset feature information matching condition, determining that detection object to be matched as the detection object y;
wherein the detection object to be matched and the detection object x meet the preset feature information matching condition when: the difference between the image features of the detection object to be matched and the image features of the detection object x is smaller than a preset feature difference threshold, the difference between the area of the detection object to be matched and the area of the detection object x is smaller than a preset area difference threshold, and the included angle between the motion vector corresponding to the detection object to be matched and the detection vector corresponding to the detection object to be matched and the detection object x is smaller than a preset included angle threshold. The area of the detection object to be matched is determined according to the size information of the detection object to be matched, and the area of the detection object x is determined according to the size information of the detection object x. The motion vector corresponding to the detection object to be matched is determined according to the position information of the detection object to be matched in a frame of video image containing it, found from the previous N frames of video images of the video image to which the detection object to be matched belongs, and the position information of the detection object to be matched in the video image to which it belongs. The detection vector corresponding to the detection object to be matched and the detection object x is determined according to the position information of the detection object to be matched and the position information of the detection object x.
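The three-part matching condition can be sketched as follows. The threshold values are illustrative placeholders (the patent leaves them as presets), and `angle_between` computes the included angle between the candidate's motion vector and the detection vector toward the detection object x via the dot product:

```python
import math

def angle_between(u, v):
    """Included angle in radians between two 2D vectors."""
    nu = math.hypot(*u)
    nv = math.hypot(*v)
    if nu == 0 or nv == 0:
        return 0.0
    cosine = max(-1.0, min(1.0, (u[0] * v[0] + u[1] * v[1]) / (nu * nv)))
    return math.acos(cosine)

def is_match(feat_diff, area_a, area_x, motion_vec, check_vec,
             feat_thresh=0.5, area_thresh=100.0, angle_thresh=math.pi / 6):
    """All three conditions must hold (thresholds are illustrative)."""
    return (feat_diff < feat_thresh                      # image feature difference
            and abs(area_a - area_x) < area_thresh       # area difference
            and angle_between(motion_vec, check_vec) < angle_thresh)  # angle
```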
In the apparatus shown in fig. 5,
the location information comprises location coordinates;
the processor 502, determining a motion vector corresponding to the detection object based on the position information of the detection object in the found frame of video image and the position information of the detection object in the current video image, includes:
taking the position coordinates of the detection object in the found frame of video image as the starting point of the motion vector of the detection object, and taking the position coordinates of the detection object in the current video image as the ending point of the motion vector of the detection object.
In the apparatus shown in fig. 5,
the processor 502, determining whether the detection object crosses the boundary according to the motion vector corresponding to the detection object and a boundary line vector preset in the current video image, includes:
judging whether the motion vector corresponding to the detection object intersects with the boundary line vector; if so, further calculating the modulus of the cross product of the motion vector corresponding to the detection object and the boundary line vector, and determining whether the detection object crosses the boundary according to the modulus of the cross product of the two vectors and a crossing direction preset for the boundary line vector.
In the apparatus shown in fig. 5,
the processor 502, when determining whether the motion vector corresponding to the detection object intersects with the boundary line vector, is configured to:
assume that the coordinates of the starting point A and the ending point B of the motion vector corresponding to the detection object are (Ax, Ay) and (Bx, By), respectively, and the coordinates of the starting point C and the ending point D of the boundary line vector are (Cx, Cy) and (Dx, Dy); if the motion vector corresponding to the detection object and the boundary line vector meet the preset intersection conditions, determine that the two vectors intersect, otherwise determine that they do not intersect;
wherein the preset intersection conditions that the motion vector corresponding to the detection object and the boundary line vector need to meet are:
Condition one: the maximum of Ax and Bx is not less than the minimum of Cx and Dx;
Condition two: the maximum of Ay and By is not less than the minimum of Cy and Dy;
Condition three: the maximum of Cx and Dx is not less than the minimum of Ax and Bx;
Condition four: the maximum of Cy and Dy is not less than the minimum of Ay and By;
Condition five: the product of the modulus of the cross product of vector AC and vector AB and the modulus of the cross product of vector AB and vector AD is greater than 0;
Condition six: the product of the modulus of the cross product of vector CA and vector CD and the modulus of the cross product of vector CD and vector CB is greater than 0.
In the apparatus shown in fig. 5,
assume that the two sides divided by the boundary line vector are a first side and a second side, respectively; the first side is the left side or the lower side of the boundary line; the second side is the right side or the upper side of the boundary line;
the processor 502, when determining whether the detection object crosses the boundary according to the modulus of the cross product of the two vectors and the crossing direction preset for the boundary line vector, is configured to:
if the border crossing direction is from the first side to the second side, determining that the detection object is border crossing if the modulus of the cross product of the two vectors is a positive value, and determining that the detection object is not border crossing if the modulus of the cross product of the two vectors is a negative value;
if the cross-border direction is from the second side to the first side, determining that the detection object is not cross-border if the modulus of the cross product of the two vectors is a positive value, and determining that the detection object is cross-border if the modulus of the cross product of the two vectors is a negative value;
if the preset crossing direction covers both directions, that is, both from the first side to the second side and from the second side to the first side, determine that the detection object has crossed the boundary as long as the modulus of the cross product of the two vectors is not 0.
Embodiments of the present invention also provide a non-transitory computer readable storage medium storing instructions that, when executed by a processor, cause the processor to perform the steps in the out-of-range detection method as shown in fig. 1-4.
The above description is only for the purpose of illustrating the preferred embodiments of the present invention and is not to be construed as limiting the invention, and any modifications, equivalents, improvements and the like made within the spirit and principle of the present invention should be included in the scope of the present invention.

Claims (12)

wherein the detection object to be matched and the detection object x meet the preset feature information matching condition when: the difference between the image features of the detection object to be matched and the image features of the detection object x is smaller than a preset feature difference threshold, the difference between the area of the detection object to be matched and the area of the detection object x is smaller than a preset area difference threshold, and the included angle between the motion vector corresponding to the detection object to be matched and the detection vector corresponding to the detection object to be matched and the detection object x is smaller than a preset included angle threshold; the area of the detection object to be matched is determined according to the size information of the detection object to be matched, and the area of the detection object x is determined according to the size information of the detection object x; the motion vector corresponding to the detection object to be matched is determined according to the position information of the detection object to be matched in a frame of video image containing it, found from the previous N frames of video images of the video image to which the detection object to be matched belongs, and the position information of the detection object to be matched in the video image to which it belongs; and the detection vector corresponding to the detection object to be matched and the detection object x is determined according to the position information of the detection object to be matched and the position information of the detection object x.
CN202010252853.XA | 2020-04-02 | 2020-04-02 | Out-of-range detection method, device and storage medium | Active | granted as CN113496500B (en)

Priority Applications (1)

Application Number | Priority Date | Filing Date | Title
CN202010252853.XA (granted as CN113496500B) | 2020-04-02 | 2020-04-02 | Out-of-range detection method, device and storage medium

Applications Claiming Priority (1)

Application Number | Priority Date | Filing Date | Title
CN202010252853.XA (granted as CN113496500B) | 2020-04-02 | 2020-04-02 | Out-of-range detection method, device and storage medium

Publications (2)

Publication Number | Publication Date
CN113496500A (en) | 2021-10-12
CN113496500B (en) | 2024-09-20

Family

ID=77994300

Family Applications (1)

Application Number | Title | Priority Date | Filing Date
CN202010252853.XA (Active; granted as CN113496500B (en)) | Out-of-range detection method, device and storage medium | 2020-04-02 | 2020-04-02

Country Status (1)

Country | Link
CN (1) | CN113496500B (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
US20120134541A1 (en)* | 2010-11-29 | 2012-05-31 | Canon Kabushiki Kaisha | Object tracking device capable of detecting intruding object, method of tracking object, and storage medium
CN107657626A (en)* | 2016-07-25 | 2018-02-02 | 浙江宇视科技有限公司 | The detection method and device of a kind of moving target
CN107872644A (en)* | 2016-09-23 | 2018-04-03 | 亿阳信通股份有限公司 | Video frequency monitoring method and device
CN110516620A (en)* | 2019-08-29 | 2019-11-29 | 腾讯科技(深圳)有限公司 | Method for tracking target, device, storage medium and electronic equipment


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
洪虹 (HONG Hong); 李文耀 (LI Wenyao): "基于背景重构的运动对象越界侦测方法" [Method for detecting boundary crossing of moving objects based on background reconstruction], 电视技术 [Video Engineering], no. 07, 2 April 2012 (2012-04-02) *

Also Published As

Publication number | Publication date
CN113496500B (en) | 2024-09-20

Similar Documents

Publication | Title
CN110427905B (en) | Pedestrian tracking method, device and terminal
CN110414507B (en) | License plate recognition method and device, computer equipment and storage medium
CN108960211B (en) | Multi-target human body posture detection method and system
CN105938622B (en) | Method and apparatus for detecting object in moving image
CN102999918B (en) | Multi-target object tracking system of panorama video sequence image
CN109035295B (en) | Multi-target tracking method, device, computer equipment and storage medium
US20170151943A1 (en) | Method, apparatus, and computer program product for obtaining object
CN105868708A (en) | Image object identifying method and apparatus
CN109902576B (en) | A training method and application of a head and shoulders image classifier
US8718362B2 (en) | Appearance and context based object classification in images
US20230009925A1 (en) | Object detection method and object detection device
CN105893957A (en) | Method for recognizing and tracking ships on lake surface on the basis of vision
CN111881775B (en) | Real-time face recognition method and device
Jun et al. | LIDAR and vision based pedestrian detection and tracking system
CN109635649B (en) | High-speed detection method and system for unmanned aerial vehicle reconnaissance target
JP2011248525A (en) | Object detection device and detection method thereof
Zeng et al. | Fast human detection using mi-sVM and a cascade of HOG-LBP features
CN110647821B (en) | Method and device for object identification through image identification
CN113496500A (en) | Out-of-range detection method and device and storage medium
CN110287786B (en) | Vehicle information identification method and device based on artificial intelligence anti-interference
CN112651369A (en) | Method and device for identifying pedestrians in monitoring scene
US9646386B2 (en) | Method and apparatus for generating temporally consistent superpixels
JP2018109824A (en) | Electronic control device, electronic control system, and electronic control method
CN108664853B (en) | Face detection method and device
KR20160148806A (en) | Object Detecter Generation Method Using Direction Information, Object Detection Method and Apparatus using the same

Legal Events

Code | Title
PB01 | Publication
SE01 | Entry into force of request for substantive examination
GR01 | Patent grant
