
Method and device for determining track, computer storage medium and terminal

Info

Publication number
CN111539974A
Authority
CN
China
Prior art keywords
image frame
adjacent image
image frames
target object
target
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202010265917.XA
Other languages
Chinese (zh)
Other versions
CN111539974B (en)
Inventor
林晓明
江金陵
鲁邹尧
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Mingsheng Pinzhi Artificial Intelligence Technology Co., Ltd.
Original Assignee
Beijing Mininglamp Software System Co., Ltd.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Mininglamp Software System Co., Ltd.
Priority to CN202010265917.XA
Publication of CN111539974A
Application granted
Publication of CN111539974B
Status: Active (current)
Anticipated expiration


Abstract

The embodiment of the invention discloses a method, an apparatus, a computer storage medium and a terminal for determining a track. Whether the target objects in adjacent image frames are the same object is judged based on their moving positions, the moving track of each target object is generated automatically from the judgment result, and the efficiency of analyzing the activity of the target objects is improved.

Description

Method and device for determining track, computer storage medium and terminal
Technical Field
The present disclosure relates to, but is not limited to, multimedia technologies, and more particularly to a method, an apparatus, a computer storage medium and a terminal for determining a track.
Background
With the improvement of people's living standards, health and safety have become more and more important. Food safety is an important component of health and safety, and kitchen hygiene is an important component of food safety. In kitchen hygiene and safety, certain organisms such as mice can cause dangerous health and safety problems; since restaurant kitchens are consumer-facing places of business, monitoring such creatures there is an important task.
Taking a restaurant kitchen as an example, the cameras used in it are generally fixed, so the background changes little. In the related art, mice in a video are detected and located mainly by combining a moving-object detection model based on Gaussian-mixture background modeling with an image classification model. The processing roughly comprises the following steps: 1. detecting moving objects and their positions in the video with the moving-object detection model; 2. classifying the detected images containing moving objects with the image classification model, and determining which of them contain mice; 3. determining the mice appearing in the video from the output of the image classification model, and locating them.
This approach only detects and locates the creatures; the user then has to analyze the activity information in the detection and localization results manually. Such manual analysis of biological tracks is inefficient and time-consuming, which limits the user's analysis of biological activity trajectories.
Disclosure of Invention
The following is a summary of the subject matter described in detail herein. This summary is not intended to limit the scope of the claims.
The embodiment of the invention provides a method and a device for determining a track, a computer storage medium and a terminal, which can be used for determining the mouse activity track.
The embodiment of the invention provides a method for realizing track determination, which comprises the following steps:
determining the moving position of a target object for an image frame containing the target object in a video;
judging whether the target objects in the adjacent image frames are the same object or not according to the moving positions of the target objects in the adjacent image frames;
and generating the moving track of each target object according to the judgment result of whether the target objects in the adjacent image frames are the same object.
On the other hand, an embodiment of the present invention further provides a computer storage medium, where a computer program is stored, and when the computer program is executed by a processor, the method for implementing trajectory determination is implemented.
In another aspect, an embodiment of the present invention further provides a terminal, including: a memory and a processor, the memory having a computer program stored therein; wherein,
the processor is configured to execute the computer program in the memory;
the computer program, when executed by the processor, implements a method of implementing trajectory determination as described above.
In another aspect, an embodiment of the present invention further provides a device for implementing track determination, including: the device comprises a position determining unit, a judging unit and a track generating unit; wherein,
the position determining unit is configured to: determining the moving position of each target object for the image frame containing the target object in the video;
the judgment unit is used for: judging whether the target objects in the adjacent image frames are the same object or not according to the moving positions of the target objects in the adjacent image frames;
the track generation unit is used for: and generating the moving track of each target object according to the judgment result of whether the target objects in the adjacent image frames are the same object.
The application includes: determining the moving position of a target object for an image frame containing the target object in a video; judging whether the target objects in the adjacent image frames are the same object or not according to the moving positions of the target objects in the adjacent image frames; and generating the moving track of each target object according to the judgment result of whether the target objects in the adjacent image frames are the same object. The embodiment of the invention judges whether the target objects are the same object based on their moving positions in adjacent image frames, automatically generates the moving track of each target object from the judgment result, and improves the efficiency of analyzing the activity of the target objects.
Additional features and advantages of the invention will be set forth in the description which follows, and in part will be obvious from the description, or may be learned by practice of the invention. The objectives and other advantages of the invention will be realized and attained by the structure particularly pointed out in the written description and claims hereof as well as the appended drawings.
Drawings
The accompanying drawings are included to provide a further understanding of the invention and are incorporated in and constitute a part of this specification; they illustrate embodiments of the invention and together with the description serve to explain the principles of the invention, not to limit it.
FIG. 1 is a flow chart of a method for implementing trajectory determination according to an embodiment of the present invention;
FIG. 2 is a block diagram of an apparatus for determining a track according to an embodiment of the present invention;
FIG. 3 is a schematic diagram of an exemplary active position of the application of the present invention;
fig. 4 is a schematic diagram of an exemplary image frame to which the present invention is applied.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, embodiments of the present invention will be described in detail below with reference to the accompanying drawings. It should be noted that the embodiments and features of the embodiments in the present application may be arbitrarily combined with each other without conflict.
The steps illustrated in the flow charts of the figures may be performed in a computer system such as a set of computer-executable instructions. Also, while a logical order is shown in the flow diagrams, in some cases, the steps shown or described may be performed in an order different than here.
The inventor of the present application found through analysis that when an image contains a plurality of mice, the related art does not analyze the movement track of each mouse; yet in places such as restaurant kitchens, multiple mice typically appear, so determining the movement track of each mouse becomes a problem to be solved.
Fig. 1 is a flowchart of a method for determining a track according to an embodiment of the present invention, as shown in fig. 1, including:
step 101, determining the moving position of a target object for an image frame containing the target object in a video;
it should be noted that, in the embodiment of the present invention, the moving position of the target object may be determined by referring to an image segmentation algorithm in the related art; taking a target object as a mouse as an example, the embodiment of the invention can segment the image of the position covered by the mouse through an image segmentation algorithm as the moving position of the mouse after the mouse is identified.
In an exemplary embodiment, the target object may include a mouse. When the target object is a mouse, the video in an embodiment may include night-time video collected by an infrared camera.
Step 102, judging whether the target objects in the adjacent image frames are the same object or not according to the moving positions of the target objects in the adjacent image frames;
in one exemplary embodiment, the determining whether the target objects in the adjacent image frames are the same object includes:
determining the distance between target objects in adjacent image frames according to the position information of the moving position;
judging whether the target objects in the adjacent image frames are the same object or not according to the determined distance between the target objects in the adjacent image frames;
wherein the moving position is a preset geometric figure region position; the position information includes: reference point coordinate information of the determined moving position defined based on a preset coordinate system and a reference point, size information of the moving position, and size information of the image frame.
It should be noted that the active position in the embodiment of the present invention may be a geometric area as known to those skilled in the art as follows: circular, oval, rectangular, square, triangular, and the like.
The embodiment of the invention realizes the determination of the moving track of the target object based on the moving position of the target object in the adjacent image frames.
In one exemplary embodiment, taking the active position as a rectangular position as an example, the distance between the target objects in adjacent image frames can be determined by equation (1):
[Formula (1) appears in the original only as an image (Figure BDA0002441233320000041) and is not reproduced in this text; the quantities it combines are defined below.]
wherein d_center_x represents the absolute value of the difference between the horizontal coordinates of the center points of the target object's moving positions in the subsequent and the preceding of the adjacent image frames; d_center_y represents the absolute value of the difference between the vertical coordinates of those center points; dx1 represents the length of the target object's moving position in the preceding image frame; dx2 represents the length of the target object's moving position in the subsequent image frame; dy1 represents the width of the target object's moving position in the preceding image frame; dy2 represents the width of the target object's moving position in the subsequent image frame.
In one exemplary embodiment, determining whether the target objects in adjacent image frames are the same object comprises:
when the distance between the target objects in the adjacent image frames is smaller than a preset distance threshold value, determining that the target objects in the adjacent image frames are the same object;
and when the distance between the target objects in the adjacent image frames is greater than or equal to the distance threshold value, determining that the target objects in the adjacent image frames are different objects.
It should be noted that the distance threshold in the embodiment of the present invention may be set by a person skilled in the art based on the video's frame rate: the higher the frame rate, the smaller the threshold. In one exemplary implementation, a threshold of 1 may be set for a video at 18 frames per second.
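Because formula (1) survives only as an image reference, the following minimal Python sketch shows one plausible reading of the distance test. The center differences d_center_x and d_center_y follow directly from the rectangle definition, but the normalization of those differences by the mean box dimensions is an assumption (chosen so that the example threshold of 1 corresponds to roughly one box-size of movement), not the patent's confirmed formula; the function names are illustrative.

import math

# Each active position is written here as a tuple (x, y, dx, dy): the
# upper-left corner of the rectangle plus its length and width.

def distance(prev, curr):
    """Distance between a target object's moving positions in two adjacent
    image frames. The normalization (center offset divided by the mean box
    size per axis) is an assumed form, not the patent's exact formula."""
    x1, y1, dx1, dy1 = prev
    x2, y2, dx2, dy2 = curr
    d_center_x = abs((x2 + dx2 / 2) - (x1 + dx1 / 2))
    d_center_y = abs((y2 + dy2 / 2) - (y1 + dy1 / 2))
    return math.hypot(d_center_x / ((dx1 + dx2) / 2),
                      d_center_y / ((dy1 + dy2) / 2))

def same_object(prev, curr, dis_threshold=1.0):
    # Strictly below the threshold means "same object"; the threshold of 1.0
    # matches the example given above for an 18-frames-per-second video.
    return distance(prev, curr) < dis_threshold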
Step 103, generating a moving track of each target object according to the judgment result of whether the target objects in the adjacent image frames are the same object;
in one exemplary embodiment, generating the movement trajectory of each target object includes:
when the target objects in the adjacent image frames are different objects, setting the moving position of the target object in the subsequent image frame as the initial position of the moving track of the target object;
when the target objects in the adjacent image frames are the same object, the moving position of the target object in the subsequent image frame is added to the moving track of the target object in the previous image frame.
The embodiment of the invention judges whether the target objects are the same object based on their moving positions in adjacent image frames, automatically generates the moving track of each target object from the judgment result, and improves the efficiency of analyzing the activity of the target objects.
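As a minimal sketch of the two track-generation rules above (the interface and names are illustrative, not prescribed by the patent):

def add_position(tracks, position, matched_track=None):
    """Apply the two rules above. `tracks` is a list of tracks, each a list
    of active positions; `matched_track` is the track whose latest position
    was judged to be the same object, or None for a different/new object."""
    if matched_track is None:
        tracks.append([position])       # the position starts a new track
    else:
        matched_track.append(position)  # extend the same object's track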
In an exemplary embodiment, when the target objects in the adjacent image frames are the same object, the method in the embodiment of the present invention further includes:
determining whether the number of target objects in the subsequent image frame that are determined to be the same object as the target object in the previous image frame is greater than 1;
and when the number of the target objects which are judged to be the same as the target object in the previous image frame in the later image frame is more than 1, integrating all the moving positions of the target objects which are the same as the target object in the previous image frame into one moving position through a preset fusion function.
In an exemplary embodiment, when the moving position is a rectangular region, the fusion function (also called an aggregation or merge function) may be written merge(R1, R2, …), where R1, R2, … are the moving positions of the target objects judged to be the same object as the target object in the previous image frame, and merge(R1, R2, …) denotes the smallest rectangular region that contains all of R1, R2, ….
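That definition pins the fusion function down completely for rectangles; a direct Python sketch, with rectangles as (x, y, dx, dy) tuples as before:

def merge(*rects):
    """Fusion function: the smallest rectangle containing R1, R2, ...
    Each rectangle is (x, y, dx, dy) with (x, y) its upper-left corner."""
    xs = [r[0] for r in rects]
    ys = [r[1] for r in rects]
    rights = [r[0] + r[2] for r in rects]    # right edges  (x + dx)
    bottoms = [r[1] + r[3] for r in rects]   # bottom edges (y + dy)
    new_x, new_y = min(xs), min(ys)
    return (new_x, new_y, max(rights) - new_x, max(bottoms) - new_y)

For example, merge((0, 0, 2, 2), (1, 1, 3, 3)) yields (0, 0, 4, 4), the smallest rectangle covering both inputs.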
In the related art, a motion detection algorithm may identify different body parts of one mouse as separate mice; for example, the head is detected as one mouse and the tail as another. In the embodiment of the invention, if one mouse is identified as two mice during detection, the two can be integrated into one by judging whether they are the same object, and the moving positions belonging to the same mouse are merged by the fusion function. This improves the precision of mouse detection and prevents the analysis of the mouse's moving track from being distorted.
In an exemplary embodiment, before determining the moving position of each target object, the method of the embodiment of the present invention further includes:
and determining image frames containing the target object in the video.
It should be noted that, in the embodiments of the present invention, the detection of the target object may be realized by referring to the related art; for example, detecting moving objects in video image frames by a moving object detection model; and classifying the detected image frames containing the moving objects by using a picture classification model, and determining whether the moving objects are target objects.
The embodiment of the invention also provides a computer storage medium, wherein a computer program is stored in the computer storage medium, and when being executed by a processor, the computer program realizes the method for realizing the track determination.
An embodiment of the present invention further provides a terminal, including: a memory and a processor, the memory storing a computer program; wherein,
the processor is configured to execute the computer program in the memory;
the computer program, when executed by a processor, implements a method of implementing trajectory determination as described above.
Fig. 2 is a block diagram of a device for determining a track according to an embodiment of the present invention, as shown in fig. 2, including: the device comprises a position determining unit, a judging unit and a track generating unit; wherein,
the position determining unit is configured to: determining the moving position of each target object for the image frame containing the target object in the video;
the judgment unit is used for: judging whether the target objects in the adjacent image frames are the same object or not according to the moving positions of the target objects in the adjacent image frames;
the track generation unit is used for: and generating the moving track of each target object according to the judgment result of whether the target objects in the adjacent image frames are the same object.
In an exemplary embodiment, the determining unit is specifically configured to:
determining the distance between target objects in adjacent image frames according to the position information of the moving position;
judging whether the target objects in the adjacent image frames are the same object or not according to the determined distance between the target objects in the adjacent image frames;
wherein the moving position is a preset geometric figure region position; the position information includes: reference point coordinate information of the determined moving position defined based on a preset coordinate system and a reference point, size information of the moving position, and size information of the image frame.
In an exemplary embodiment, the determining unit is configured to determine whether the target objects in the adjacent image frames are the same object, and includes:
when the distance between the target objects in the adjacent image frames is smaller than a preset distance threshold value, determining that the target objects in the adjacent image frames are the same object;
and when the distance between the target objects in the adjacent image frames is greater than or equal to the distance threshold value, determining that the target objects in the adjacent image frames are different objects.
In an exemplary embodiment, the track generation unit is specifically configured to:
when the target objects in the adjacent image frames are different objects, setting the moving position of the target object in the subsequent image frame as the initial position of the moving track of the target object;
when the target objects in the adjacent image frames are the same object, the moving position of the target object in the subsequent image frame is added to the moving track of the target object in the previous image frame.
In an exemplary embodiment, the track generation unit is further configured to:
determining whether the number of target objects in the subsequent image frame that are determined to be the same object as the target object in the previous image frame is greater than 1;
and when the number of the target objects which are judged to be the same as the target object in the previous image frame in the later image frame is more than 1, integrating all the moving positions of the target objects which are the same as the target object in the previous image frame into one moving position through a preset fusion function.
In an exemplary embodiment, the device further comprises an image determining unit configured to:
and determining image frames containing the target object in the video.
The embodiment of the invention judges whether the target objects are the same object based on their moving positions in adjacent image frames, automatically generates the moving track of each target object from the judgment result, and improves the efficiency of analyzing the activity of the target objects.
The method of the embodiment of the present invention is briefly described below by using application examples, which are only used for illustrating the present invention and are not used for limiting the protection scope of the present invention.
Application example
To facilitate the presentation of this application example, the following definitions are used: the moving positions of the mice are represented by rectangular areas R, and the moving positions of the mice in different image frames are distinguished by numeric labels. Fig. 3 is a schematic diagram of an exemplary activity location; as shown in Fig. 3, the position information of the rectangular area can be represented by the coordinates (x, y, dx, dy, w, h), where w and h are the length and width of the video image collected by the camera, x and y are the abscissa and ordinate of the upper-left corner of the active position, and dx and dy are the length and width of the rectangular area.
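For concreteness, this representation can be written as a small Python data type; the class and property names are illustrative, not from the patent:

from dataclasses import dataclass

@dataclass
class ActivePosition:
    """Rectangular active position R = (x, y, dx, dy, w, h) as defined above."""
    x: float   # abscissa of the rectangle's upper-left corner
    y: float   # ordinate of the rectangle's upper-left corner
    dx: float  # length of the rectangular area
    dy: float  # width of the rectangular area
    w: float   # length of the video image collected by the camera
    h: float   # width of the video image collected by the camera

    @property
    def center(self):
        # Center point used by the d_center_x / d_center_y differences below.
        return (self.x + self.dx / 2, self.y + self.dy / 2)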
In this application example the target object is a mouse, and night-time video of a kitchen is collected through an installed infrared camera; moving objects in the captured video are detected using a Gaussian mixture model. The main principle of Gaussian-mixture background modeling is to construct a background for the video and then, for each frame, compute the difference between the frame and the background to detect the foreground of the frame, which is judged to be a moving object;
the application example obtains the image classification model through training in the following way: in the training stage, whether a moving object is a mouse or not is marked to obtain a training sample; inputting the obtained training sample into a deep learning convolutional neural network, and training to obtain an image classification model; the convolutional neural network of the present application example may include: deepening the network layer number (ResNet), dense convolutional network (Densenet), computer vision group (VGG), and the like; after the image classification model is obtained through training, the image frame of the mouse as the moving object is determined by classifying each frame image (image frame) in the video.
For each image frame in which the moving object is a mouse, the moving position of the mouse is obtained through image segmentation; in this application example the moving position is represented by the rectangular region R = (x, y, dx, dy, w, h) defined above;
in the present application example, the distance between rats in adjacent image frames is determined by the following formula:
[The distance formula appears in the original only as an image (Figure BDA0002441233320000091) and is not reproduced here.]
wherein dx1 and dy1 represent the length and width of the mouse's active position in the preceding of the adjacent image frames, and dx2 and dy2 the length and width in the subsequent image frame; d_center_x represents the absolute value of the difference between the horizontal coordinates of the center points of the mouse's active positions in the subsequent and preceding image frames, which follows from the rectangle definition as:
d_center_x = |(x2 + dx2/2) - (x1 + dx1/2)|
and d_center_y represents the absolute value of the difference between the vertical coordinates of those center points:
d_center_y = |(y2 + dy2/2) - (y1 + dy1/2)|
where (x1, y1) and (x2, y2) are the upper-left corners of the active positions in the preceding and subsequent image frames respectively.
after the distance between the rats in the adjacent image frames is calculated according to the formula, when the distance between the rats in the adjacent image frames calculated by the application example is smaller than a preset distance threshold value, determining that the rats in the two image frames are the same rat; when the distance between the mice in the adjacent image frames is larger than or equal to a preset distance threshold value, determining that the mice in the two image frames are not the same mouse; it should be noted that, when the image frames include a plurality of mice, the application example selects one mouse each time from the adjacent image frames, and determines whether the same mouse is the same mouse according to the corresponding moving position;
when the rats in the adjacent image frames are different rats, setting the moving position of the rat in the subsequent image frame as the initial position of the moving track of the rat;
when the rats in the adjacent image frames are the same rat, the moving position of the rat in the following image frame is added to the moving track of the rat in the preceding image frame.
In the present application example, when the number of mice determined to be the same as the mouse in the previous image frame in the subsequent image frame is greater than 1, all the active positions determined to be the same as the mouse in the previous image frame are integrated into one active position by the fusion function.
FIG. 4 is a schematic diagram of exemplary image frames to which the present invention is applied; it comprises 8 image frames, with each mouse shown as a solid dot. An activity track is generated for the first mouse from its activity position R1; call the track A, so that A = [R1] (R1 is the first activity point of this mouse's track). Two mice appear in the second frame, with activity positions R2 and R3. The application example selects R2 or R3 at random, calculates its distance to R1, and judges against the distance threshold whether the mice at R2 and R3 are the same mouse as the one at R1. Assuming both are judged to be the same mouse, R2 and R3 are integrated into one active position containing both; denoting the integrated position R3*, R3* is added to track A to obtain A = [R1, R3*]. The situation that the mice at R2 and R3 are in fact one mouse can arise from the motion detection algorithm: for example, if within one frame the head and tail of a mouse move while its body does not, the head may be detected as one mouse and the tail as another. Since R2 and R3 belong to the same frame, it is not appropriate to retain both in track A. For R4 in the third frame, the distance between R4 and R3* is calculated; assuming it is less than the distance threshold, R4 is added to track A, giving A = [R1, R3*, R4]. Two mice with activity positions R5 and R6 appear in the fourth frame, and the distances between R5 and R4 and between R6 and R4 are calculated; assuming the distance between R5 and R4 is less than the threshold while the distance between R6 and R4 is greater than the threshold, R5 is added to track A, giving A = [R1, R3*, R4, R5], and a new track B is created to record the second detected mouse, with B = [R6]. Two mice with activity positions R7 and R8 appear in the fifth frame, and the distances between R7 and R5, R7 and R6, R8 and R5, and R8 and R6 are calculated; assuming that only the distance between R7 and R5 and the distance between R8 and R6 are below the threshold, R7 and R8 are added to tracks A and B respectively, giving A = [R1, R3*, R4, R5, R7] and B = [R6, R8].
The processing of the application example is outlined below in the style of a programming language:
Define a fusion function merge(R1, R2, …) = (new_x, new_y, new_dx, new_dy, w, h), the smallest rectangular area containing R1, R2, ….
Assuming one piece of track data is track = [R1, R2, …, Rk], the distance between an active position Rk+1 and the track is the distance between Rk+1 and the most recently added position Rk: distance(track, Rk+1) = distance(Rk, Rk+1).
Assume the video has n frames in total; frame i contains m(i) mice, and R(i,j) denotes the j-th active position in frame i.
Define the set of all activity tracks as tracks = [] ([] denotes an empty list);
define the maximum allowed distance between adjacent R in a track as dis_threshold;
define [a, b] + [c] = [a, b, c];
let s = [a, b, c]; then len(s) = 3, s[0] = a, s[1] = b, s[2] = c.
The first step: set i = 0.
The second step: i = i + 1; if i > n, the calculation ends.
The third step: let the number of tracks be lt = len(tracks), i.e. tracks = [track_1, track_2, …, track_lt].
The fourth step: set a temporary list temp_R = [] and a temporary list new_tracks = [].
The fifth step: set j = 0.
The sixth step: j = j + 1; if j > m(i), proceed to the eighth step.
The seventh step: if tracks == [], take R(i,j) as a new track new_track = [R(i,j)]; let temp_R = temp_R + [lt + 1], lt = lt + 1, new_tracks = new_tracks + [new_track], and return to the sixth step.
Otherwise, calculate in turn the distances between R(i,j) and all activity tracks in tracks. Suppose R(i,j) is closest to the k-th activity track, at distance dis_min. If dis_min <= dis_threshold, let temp_R = temp_R + [k] and return to the sixth step.
Otherwise, calculate the shortest distance between R(i,j) and all activity tracks in new_tracks; suppose the shortest distance, dis_min1, is to the h-th new track. If dis_min1 <= dis_threshold, let temp_R = temp_R + [len(tracks) + h] and new_tracks[h] = [merge(new_tracks[h][0], R(i,j))] (the moving positions of the same mouse in the same frame are merged), and return to the sixth step.
Otherwise, R(i,j) starts a new track new_track = [R(i,j)]; let temp_R = temp_R + [lt + 1], lt = lt + 1, new_tracks = new_tracks + [new_track] (separating the tracks of multiple mice), and return to the sixth step.
The eighth step: add the new active positions of the current image frame to the original tracks. That is, for several identical values not larger than len(tracks) in temp_R (for example, if the 1st and 2nd values in temp_R are both 1, then track_1 = track_1 + [merge(R(i,1), R(i,2))]), and for a value not larger than len(tracks) that appears only once in temp_R (for example, if the value 2 appears only once, at the 3rd position, then track_2 = track_2 + [R(i,3)]).
The ninth step: add the new activity tracks to tracks: tracks = tracks + new_tracks.
The tenth step: return to the second step.
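A condensed Python sketch of the ten-step procedure above, using the distance() and merge() helpers sketched earlier (so the distance normalization remains an assumed form, not the patent's exact formula); build_tracks and its bookkeeping names are illustrative:

def build_tracks(frames_positions, dis_threshold=1.0):
    """`frames_positions[i]` is the list of active positions R(i, j)
    detected in frame i, each an (x, y, dx, dy) tuple."""
    tracks = []                               # all tracks found so far
    for positions in frames_positions:        # steps 1-2: frame loop
        matched = {}                          # track index -> positions matched this frame
        new_tracks = []                       # tracks opened in this frame
        for R in positions:                   # steps 5-7: position loop
            # nearest existing track, measured against its latest position
            best_k, best_d = None, None
            for k, track in enumerate(tracks):
                d = distance(track[-1], R)
                if best_d is None or d < best_d:
                    best_k, best_d = k, d
            if best_k is not None and best_d <= dis_threshold:
                matched.setdefault(best_k, []).append(R)
            else:
                # a mouse already seen in this same frame? merge, don't duplicate
                for nt in new_tracks:
                    if distance(nt[0], R) <= dis_threshold:
                        nt[0] = merge(nt[0], R)
                        break
                else:
                    new_tracks.append([R])    # a genuinely new mouse
        # step 8: extend matched tracks; several matches in one frame fuse
        for k, rs in matched.items():
            tracks[k].append(rs[0] if len(rs) == 1 else merge(*rs))
        tracks.extend(new_tracks)             # step 9: adopt the new tracks
    return tracks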
"one of ordinary skill in the art will appreciate that all or some of the steps of the methods, systems, functional modules/units in the devices disclosed above may be implemented as software, firmware, hardware, and suitable combinations thereof. In a hardware implementation, the division between functional modules/units mentioned in the above description does not necessarily correspond to the division of physical components; for example, one physical component may have multiple functions, or one function or step may be performed by several physical components in cooperation. Some or all of the components may be implemented as software executed by a processor, such as a digital signal processor or microprocessor, or as hardware, or as an integrated circuit, such as an application specific integrated circuit. Such software may be distributed on computer readable media, which may include computer storage media (or non-transitory media) and communication media (or transitory media). The term computer storage media includes volatile and nonvolatile, removable and non-removable media implemented in any method or technology for storage of information such as computer readable instructions, data structures, program modules or other data, as is well known to those of ordinary skill in the art. Computer storage media includes, but is not limited to, RAM, ROM, EEPROM, flash memory or other memory technology, CD-ROM, Digital Versatile Disks (DVD) or other optical disk storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium which can be used to store the desired information and which can accessed by a computer. In addition, communication media typically embodies computer readable instructions, data structures, program modules or other data in a modulated data signal such as a carrier wave or other transport mechanism and includes any information delivery media as known to those skilled in the art. ".

Claims (10)

1. A method of implementing trajectory determination, comprising:
determining the moving position of a target object for an image frame containing the target object in a video;
judging whether the target objects in the adjacent image frames are the same object or not according to the moving positions of the target objects in the adjacent image frames;
and generating the moving track of each target object according to the judgment result of whether the target objects in the adjacent image frames are the same object.
2. The method of claim 1, wherein the determining whether the target objects in the adjacent image frames are the same object comprises:
determining the distance between target objects in the adjacent image frames according to the position information of the moving positions;
judging whether the target objects in the adjacent image frames are the same object or not according to the determined distance between the target objects in the adjacent image frames;
the movable position is a preset geometric figure region position; the location information includes: defining reference point coordinate information of the determined moving position, size information of the moving position and size information of the image frame based on a preset coordinate system and a reference point.
3. The method of claim 2, wherein the active position is a rectangular position and the distance between the target objects in the adjacent image frames is determined by the following formula:
[The formula appears in the original only as an image (Figure FDA0002441233310000011) and is not reproduced here; the variables it combines are defined below.]
wherein d_center_x represents the absolute value of the difference between the horizontal coordinates of the center points of the target object's moving positions in the subsequent and the preceding of the adjacent image frames; d_center_y represents the absolute value of the difference between the vertical coordinates of those center points; dx1 represents the length of the target object's moving position in the preceding image frame; dx2 represents the length of the target object's moving position in the subsequent image frame; dy1 represents the width of the target object's moving position in the preceding image frame; and dy2 represents the width of the target object's moving position in the subsequent image frame.
4. The method of claim 2, wherein the determining whether the target objects in the adjacent image frames are the same object comprises:
when the distance between the target objects in the adjacent image frames is smaller than a preset distance threshold value, determining that the target objects in the adjacent image frames are the same object;
when the distance between the target objects in the adjacent image frames is larger than or equal to the distance threshold value, determining that the target objects in the adjacent image frames are different objects.
5. The method according to any one of claims 2 to 4, wherein the generating of the movement track of each target object comprises:
when the target objects in the adjacent image frames are different objects, setting the moving position of the target object in the subsequent image frame as the starting position of the moving track of the target object;
when the target objects in the adjacent image frames are the same object, adding the moving position of the target object in the subsequent image frame to the moving track of the target object in the previous image frame.
6. The method of claim 5, wherein when the target objects in the adjacent image frames are the same object, the method further comprises:
determining whether the number of target objects in the subsequent image frame that are determined to be the same object as the target object in the previous image frame is greater than 1;
and when the number of the target objects which are judged to be the same as the target object in the previous image frame in the later image frame is more than 1, integrating all the moving positions of the target objects which are the same as the target object in the previous image frame into one moving position through a preset fusion function.
7. A method according to any of claims 1 to 3, wherein prior to determining the active position of each target object, the method further comprises:
determining an image frame in the video that includes the target object.
8. A computer storage medium having a computer program stored thereon, wherein the computer program, when executed by a processor, implements the method of implementing trajectory determination according to any one of claims 1 to 7.
9. A terminal, comprising: a memory and a processor, the memory having a computer program stored therein; wherein,
the processor is configured to execute the computer program in the memory;
the computer program, when executed by the processor, implements a method of implementing trajectory determination as recited in any of claims 1-7.
10. An apparatus for implementing trajectory determination, comprising: the device comprises a position determining unit, a judging unit and a track generating unit; wherein,
the position determining unit is configured to: determining the moving position of each target object for the image frame containing the target object in the video;
the judgment unit is used for: judging whether the target objects in the adjacent image frames are the same object or not according to the moving positions of the target objects in the adjacent image frames;
the track generation unit is used for: and generating the moving track of each target object according to the judgment result of whether the target objects in the adjacent image frames are the same object.
CN202010265917.XA, filed 2020-04-07: Method and device for determining track, computer storage medium and terminal (Active; granted as CN111539974B (en))

Priority Applications (1)

Application Number / Priority Date / Filing Date / Title
CN202010265917.XA / 2020-04-07 / 2020-04-07 / Method and device for determining track, computer storage medium and terminal (granted as CN111539974B (en))

Applications Claiming Priority (1)

Application Number / Priority Date / Filing Date / Title
CN202010265917.XA / 2020-04-07 / 2020-04-07 / Method and device for determining track, computer storage medium and terminal (granted as CN111539974B (en))

Publications (2)

Publication Number / Publication Date
CN111539974A (en) / 2020-08-14
CN111539974B (en) / 2022-11-11

Family

ID=71980444

Family Applications (1)

Application Number / Status / Priority Date / Filing Date
CN202010265917.XA / Active (CN111539974B (en)) / 2020-04-07 / 2020-04-07

Country Status (1)

Country / Link
CN (1) / CN111539974B (en)


Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number / Priority date / Publication date / Assignee / Title
CN103646253A (en) * / 2013-12-16 / 2014-03-19 / 重庆大学 / Bus passenger flow statistics method based on multi-motion passenger behavior analysis
CN103929685A (en) * / 2014-04-15 / 2014-07-16 / 中国华戎控股有限公司 / Video abstract generating and indexing method
CN104966045A (en) * / 2015-04-02 / 2015-10-07 / 北京天睿空间科技有限公司 / Video-based airplane entry-departure parking lot automatic detection method
CN108664912A (en) * / 2018-05-04 / 2018-10-16 / 北京学之途网络科技有限公司 / A kind of information processing method, device, computer storage media and terminal
CN109886999A (en) * / 2019-01-24 / 2019-06-14 / 北京明略软件系统有限公司 / Location determining method, device, storage medium and processor

Also Published As

Publication number / Publication date
CN111539974B (en) / 2022-11-11


Legal Events

Code / Title
PB01 / Publication
SE01 / Entry into force of request for substantive examination
GR01 / Patent grant
TR01 / Transfer of patent right
Effective date of registration: 2023-08-11
Address after: 200232 unit 5b06, floor 5, building 2, No. 277, Longlan Road, Xuhui District, Shanghai
Patentee after: Shanghai Mingsheng Pinzhi Artificial Intelligence Technology Co., Ltd.
Address before: 100084 a1002, 10th floor, building 1, yard 1, Zhongguancun East Road, Haidian District, Beijing
Patentee before: MININGLAMP SOFTWARE SYSTEMS Co., Ltd.

