Disclosure of Invention
Accordingly, an objective of the embodiments of the present invention is to provide a method and apparatus for determining an abnormality of a vehicle track.
In a first aspect, an embodiment of the present invention provides a method for determining an abnormality in a vehicle track, the method including:
obtaining track data of a target vehicle from a video image to be analyzed;
extracting track characteristics of the target vehicle according to the track data;
processing the track features to obtain track feature vectors;
and inputting the track feature vector into a classification model trained in advance to judge the track abnormality condition of the target vehicle and obtain an abnormality determination result for the target vehicle.
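Purely as an illustrative sketch and not as part of the claims, the flow of the first aspect can be expressed in Python as follows; all function and parameter names are hypothetical placeholders:

```python
from typing import Callable, List, Sequence, Tuple

Point = Tuple[float, float]  # (x, y) coordinates of one track point

def judge_track(
    track: Sequence[Point],                             # track data of the target vehicle
    extract: Callable[[Sequence[Point]], List[float]],  # track features -> feature vector
    classify: Callable[[List[float]], int],             # classification model trained in advance
) -> int:
    """Return the track abnormality label predicted for one vehicle track."""
    feature_vector = extract(track)
    return classify(feature_vector)
```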
Optionally, the method further comprises a step of training a classification model, the step comprising:
obtaining track data of a sample vehicle from a sample video image;
obtaining track characteristics of the sample vehicle according to the track data of the sample vehicle;
processing the track characteristics of the sample vehicle to obtain a track characteristic vector of the sample vehicle;
and inputting the track feature vector of the sample vehicle and the track abnormal condition of the sample vehicle into a classification model, and training the classification model.
Optionally, obtaining track data of the target vehicle from the video image to be analyzed includes:
calculating track point data of the target vehicle in each video image frame by using a tracking algorithm, and forming the track data of the target vehicle in the video image to be analyzed from the track point data of all the video image frames, wherein the track point data include the coordinates of the track points.
Optionally, the track feature includes a track point distance difference histogram and a track point maximum distance difference, and the extracting the track feature of the target vehicle according to the track data includes:
calculating the distance differences between the track points of adjacent video image frames sampled at equal intervals in the video image to be analyzed, obtaining a track point distance difference histogram from the distance differences, and extracting the track point maximum distance difference from the track point distance difference histogram.
Optionally, the track feature further includes a track point angle histogram and a track angle maximum, and the extracting the track feature of the target vehicle according to the track data further includes:
calculating the angle values among the track points of three adjacent video image frames sampled at equal intervals in the video image to be analyzed, obtaining a track point angle histogram from the angle values, and extracting the track angle maximum value from the track point angle histogram.
Optionally, the track feature further includes track density, and the extracting the track feature of the target vehicle according to the track data further includes:
counting the total number of track points of the target vehicle in the video image to be analyzed;
starting from a first preset video image frame of the video image to be analyzed, calculating a first distance difference of track points of adjacent video image frames in a first preset direction and a second distance difference of track points of adjacent video image frames in a second preset direction, and counting the number of track points of which the sum of the first distance difference and the second distance difference is smaller than a preset threshold value;
forming the track density from the total number of track points and the number of track points for which the sum of the first distance difference and the second distance difference is smaller than the preset threshold value.
Optionally, the track feature further includes a track vertical offset, and extracting the track feature of the target vehicle according to the track data further includes:
calculating a third distance difference of track points of adjacent video image frames in the video image to be analyzed in a third preset direction from a second preset video image frame of the target vehicle, wherein the third preset direction is the extending direction of a road where the target vehicle is located;
and calculating the sum of all third distance differences of the track points from the second preset video image frame to the last frame to obtain the track vertical offset.
Optionally, the processing the track feature to obtain a track feature vector includes:
normalizing one or more of the track point distance difference histogram, the track point maximum distance difference, the track point angle histogram, the track angle maximum value, the track density and the track vertical offset to obtain the track feature vector.
In a second aspect, an embodiment of the present invention further provides a vehicle track abnormality determination apparatus, including:
the data acquisition module is used for acquiring track data of the target vehicle from the video image to be analyzed;
the feature extraction module is used for extracting track features of the target vehicle according to the track data;
the vector acquisition module is used for processing the track characteristics to obtain track characteristic vectors;
and the classification module is used for inputting the track feature vector into a classification model which is trained in advance, judging the track abnormal condition of the target vehicle and obtaining the abnormal condition of the target vehicle.
Optionally, the apparatus is further configured to train a classification model, in particular:
the data acquisition module acquires track data of a sample vehicle from a sample video image;
the feature extraction module obtains track features of the sample vehicle according to track data of the sample vehicle;
the vector acquisition module processes the track characteristics of the sample vehicle to obtain a track characteristic vector of the sample vehicle;
the classification module inputs the track feature vector of the sample vehicle and the track abnormal condition of the sample vehicle into a classification model, and trains the classification model.
Compared with the prior art, the invention has the following beneficial effects:
According to the vehicle track abnormality determination method and apparatus, a classification model is first trained on the track feature vectors and known track abnormality conditions of sample vehicles extracted from sample video images. The track feature vector of the target vehicle is then obtained from the track data of the target vehicle extracted from the video image to be analyzed, and this feature vector is input into the classification model trained in advance to judge the track abnormality condition of the target vehicle. Because the track feature vector of the target vehicle combines a plurality of track features, factors such as the position, moving speed and direction of the target vehicle are considered comprehensively, which improves the accuracy of tracking abnormality detection. Because the acquired track feature vector is processed and judged by the classification model, determination errors are reduced, the number of erroneous violation penalties is reduced, the effectiveness of penalties is improved, and penalty disputes are avoided.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the embodiments of the present invention more apparent, the technical solutions of the embodiments of the present invention will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present invention, and it is apparent that the described embodiments are some embodiments of the present invention, but not all embodiments of the present invention. Thus, the following detailed description of the embodiments of the invention, as presented in the figures, is not intended to limit the scope of the invention, as claimed, but is merely representative of selected embodiments of the invention. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
It should be noted that: like reference numerals and letters denote like items in the following figures, and thus once an item is defined in one figure, no further definition or explanation thereof is necessary in the following figures.
In the description of the present invention, it should be noted that the orientation or positional relationship indicated by terms such as "upper" and "lower" is based on the orientation or positional relationship shown in the drawings, or on the orientation or positional relationship in which the product of the invention is conventionally placed in use. These terms are used merely for convenience of describing the present invention and simplifying the description, and do not indicate or imply that the apparatus or element referred to must have a specific orientation or be constructed and operated in a specific orientation; therefore, they should not be construed as limiting the present invention.
In the description of the present invention, it should also be noted that, unless explicitly specified and limited otherwise, the terms "disposed" and "connected" are to be construed broadly; for example, a connection may be a fixed connection, a detachable connection or an integral connection; it may be a mechanical connection or an electrical connection; it may be a direct connection, an indirect connection through an intermediate medium, or a communication connection between two elements. The specific meanings of the above terms in the present invention can be understood by those of ordinary skill in the art according to the specific circumstances.
In the prior art, there are generally two methods for determining whether the track of a tracked target vehicle is abnormal. In the first method, whether the track of the target vehicle is abnormal is determined by detecting the number of vehicles of the same type adjacent to the target vehicle. In the second method, a suitable calculation function and statistical method are selected according to the positions of the first two track points of the target vehicle, the probability or probability range of the landing position of each subsequent track point is calculated, and, starting from the third track point, the probability of each track point is calculated from its two preceding track points.
However, the applicant found that the above schemes have two drawbacks. On the one hand, if only the number of adjacent vehicles of the same type as the vehicle under test is used to determine whether the track is abnormal, or only the position information of the preceding track points is used to determine the probability of the position of a subsequent track point, other factors, such as the motion state of the vehicle under test and differences between the scenes in which it travels, are ignored; the limited determination conditions are likely to affect the accuracy of the determination result. On the other hand, if the data processing is performed only in the original space of the track data, the effectiveness of track feature extraction is also reduced. Both aspects may lead to erroneous determination of a target vehicle track abnormality.
In order to overcome the above drawbacks of the prior art, the applicant provides the solution described in the following embodiments.
Referring to fig. 1, an embodiment of the present invention provides a vehicle track abnormality determination system 1. The vehicle track abnormality determination system 1 comprises a server 10 and at least one video acquisition end 11, and the server 10 and the video acquisition end 11 may be communicatively connected through a wireless or wired network to realize data communication or interaction between them.
The video capturing end 11 may be an intelligent camera, and is configured to capture video image information of the tracked target vehicle, and transmit the captured video image to the server 10 through a network.
The server 10 may be, but is not limited to, a web server, an FTP (File Transfer Protocol) server, or the like, and is used for processing the received information and performing tracking abnormality determination.
Fig. 2 is a schematic diagram of the server 10 shown in fig. 1. The server 10 includes a vehicle track abnormality determination device 100, a storage unit 101, a processing unit 102, and a communication unit 103.
The storage unit 101, the processing unit 102 and the communication unit 103 are electrically connected to each other directly or indirectly to realize data transmission or interaction. For example, these components may be electrically connected to each other via one or more communication buses or signal lines. The vehicle track abnormality determining device 100 includes at least one software function module that may be stored in the storage unit 101 in the form of software or firmware, or embedded in the operating system (OS) of the server 10. The processing unit 102 is configured to execute the executable modules stored in the storage unit 101, such as the software function modules and computer programs included in the vehicle track abnormality determination apparatus 100.
The storage unit 101 may be, but is not limited to, a Random Access Memory (RAM), a Read-Only Memory (ROM), a Programmable Read-Only Memory (PROM), an Erasable Programmable Read-Only Memory (EPROM), an Electrically Erasable Programmable Read-Only Memory (EEPROM), or the like. The processing unit 102 may be a general-purpose processor, including a Central Processing Unit (CPU), a Network Processor (NP), or the like; it may also be a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or discrete hardware components, which may implement or execute the methods, steps and logic blocks disclosed in the embodiments of the present invention. The communication unit 103 is configured to establish a communication connection between the server 10 and the video capturing end 11 through a network, and to send and receive data through the network.
Referring to fig. 3, an embodiment of the present invention further provides a vehicle track anomaly determination method applied to the server 10 shown in fig. 1 and 2. The specific flow of the vehicle track abnormality determination method is explained in detail below.
The vehicle track abnormality judging method comprises the following steps:
Step S11, obtaining track data of the target vehicle from the video image to be analyzed.
In this embodiment, the video to be analyzed may be acquired by the video acquisition end 11. The step S11 specifically includes:
calculating track point data of the target vehicle in each video image frame by using a tracking algorithm, and forming the track data of the target vehicle in the video image to be analyzed from the track point data of all the video image frames, wherein the track point data include the coordinates of the track points.
In this embodiment, the tracking algorithm is used to identify and track the target vehicle, and may be a centroid tracking algorithm, a correlation tracking algorithm, an edge tracking algorithm, a phase correlation tracking algorithm, a scene locking algorithm, or the like, which is not specifically limited herein. The tracking algorithm represents the target vehicle as one track point in each frame of the video image to be analyzed, the track point coordinates of the target vehicle in the video image to be analyzed are acquired, and these coordinates are integrated into the track data. For convenience of calculation and analysis, as a preferred mode, the extending direction of the road where the target vehicle is located in the video image to be analyzed may be taken as the longitudinal axis of the coordinate sampling and the direction perpendicular to the extending direction of the road as the transverse axis, the coordinates of the track points being denoted by (xi, yi) (i = 1, 2, …, n), where (xi, yi) represents the coordinates of the track point of the target vehicle in the i-th tracked frame, and n represents the total number of tracked frames of the target vehicle in the video image to be analyzed, which is also the total number of track points. It should be noted that this preferred mode is equally applicable to the process of extracting the track feature vector described below.
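As a non-limiting sketch, assuming a tracker that returns the target's (x, y) position per frame (the locate_target callable below is a hypothetical placeholder, not a specific tracking algorithm), step S11 could be expressed as:

```python
from typing import Callable, Iterable, List, Tuple

Point = Tuple[float, float]  # x: across the road, y: along the road extension

def collect_track_data(
    frames: Iterable,                          # decoded frames of the video to be analyzed
    locate_target: Callable[[object], Point],  # any tracking algorithm (centroid, correlation, ...)
) -> List[Point]:
    """Form the track data as the list of per-frame track point coordinates."""
    return [locate_target(frame) for frame in frames]
```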
After obtaining the trajectory data of the target vehicle, the process proceeds to step S12.
Step S12, extracting track features of the target vehicle according to the track data.
In this embodiment, a plurality of track features are extracted from the track data of the target vehicle, so that the track information is used more comprehensively and efficiently and the accuracy of tracking abnormality detection is improved. The four track feature extraction processes provided in this embodiment, namely the track point distance difference histogram together with the track point maximum distance difference, the track point angle histogram together with the track angle maximum value, the track density, and the track vertical offset, are described in detail below.
In this embodiment, the track feature includes a track point distance difference histogram and a track point maximum distance difference, and the extracting the track feature of the target vehicle according to the track data includes:
calculating the distance differences between the track points of adjacent video image frames sampled at equal intervals in the video image to be analyzed, obtaining a track point distance difference histogram from the distance differences, and extracting the track point maximum distance difference from the track point distance difference histogram.
Taking the i-th frame of the video image to be analyzed as an example, when the interval of the equal-interval sampling is 0, the distance difference Δzi,i-1 between the track points (xi, yi) and (xi-1, yi-1) is calculated as follows:
Δzi,i-1 = √((xi − xi-1)² + (yi − yi-1)²)
If the sampling interval is not 0, the video image frames within the interval are ignored, and the coordinates of the track points in the preceding video image frame adjacent (under equal-interval sampling) to the current video image frame are still denoted by (xi, yi) and (xi-1, yi-1), and so on.
From the start frame up to the i-th frame of the video image to be analyzed, the track point distance difference histogram D2[N2] is extracted from the distance differences Δzi,i-1, and the track point maximum distance difference D2max is obtained by traversing the track point distance difference histogram D2[N2]. The process is as follows:
If Δzi,i-1 ≤ D2max or Δzi,i-1 = 0: when D2max = 0, D2[0] is increased by 1; when D2max ≠ 0, the histogram bin corresponding to Δzi,i-1 is increased by 1.
If Δzi,i-1 > D2max and Δzi,i-1 ≠ 0: D2[N2] is traversed, and whenever D2[j] ≠ 0 (j = 1, 2, …, N2 − 1), the value of D2[j] is first migrated to the corresponding front bin and D2[j] is then set to 0; finally D2[N2 − 1] is set to 1 and D2max takes the value Δzi,i-1.
Here N2 is a preset value representing the number of bins of the track point distance difference histogram D2[N2]; for example, when N2 = 4, the track point distance difference histogram contains the elements D2[0], D2[1], …, D2[3]. j is the index of the histogram element currently traversed, with j ≤ N2.
The track point distance differences are counted using the track point distance difference histogram, and the last bin of the histogram represents the track point maximum distance difference. When the distance difference of the currently traversed track point is larger than the previously counted maximum distance difference, the current distance difference value replaces the original maximum distance difference value, and the remaining elements are migrated in turn to the corresponding front bins; in this way, the track point maximum distance difference is extracted from the distance differences between the track points of adjacent video image frames sampled at equal intervals. The track point maximum distance difference represents the maximum displacement of the target vehicle within a certain short time interval, that is, the highest running speed of the target vehicle in the video image to be analyzed.
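A minimal Python sketch of this histogram update is given below; the Euclidean distance difference follows the formula above, while the uniform bin layout and the rescaled migration index are assumptions, since the exact bin boundaries are not reproduced in the text:

```python
import math
from typing import List, Sequence, Tuple

Point = Tuple[float, float]

def distance_diff_histogram(track: Sequence[Point], n2: int = 4) -> Tuple[List[int], float]:
    """Build the distance difference histogram D2[N2] and the maximum distance difference D2max."""
    hist = [0] * n2
    d2_max = 0.0
    for (x0, y0), (x1, y1) in zip(track, track[1:]):   # adjacent sampled track points
        dz = math.hypot(x1 - x0, y1 - y0)              # distance difference Δz
        if dz <= d2_max or dz == 0.0:
            if d2_max == 0.0:
                hist[0] += 1
            else:
                # assumption: bins spread uniformly over [0, D2max]
                hist[min(int(dz / d2_max * (n2 - 1)), n2 - 1)] += 1
        else:
            # new maximum: migrate existing counts toward the front bins
            for j in range(1, n2):
                if hist[j] != 0:
                    hist[int(j * d2_max / dz)] += hist[j]   # assumption: rescaled bin index
                    hist[j] = 0
            hist[n2 - 1] = 1
            d2_max = dz
    return hist, d2_max
```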
In this embodiment, the track feature further includes a track point angle histogram and a track angle maximum value, and the extracting the track feature of the target vehicle according to the track data further includes:
calculating the angle values among the track points of three adjacent video image frames sampled at equal intervals in the video image to be analyzed, obtaining a track point angle histogram from the angle values, and extracting the track angle maximum value from the track point angle histogram.
Taking the i-th frame of the video image to be analyzed as an example, when the interval of the equal-interval sampling is 0, the track points of the current frame, the previous frame and the frame before that are (xi, yi), (xi-1, yi-1) and (xi-2, yi-2), respectively. The connecting line between the track points of the (i−2)-th and (i−1)-th frames is li-1,i-2, and the connecting line between the track points of the (i−1)-th and i-th frames is li,i-1. The angle α between the connecting lines li,i-1 and li-1,i-2 is calculated as follows:
cos α = (Δzi,i-1² + Δzi-1,i-2² − Δzi,i-2²) / (2 · Δzi,i-1 · Δzi-1,i-2)
α = arccos(cos α) · 180 / π
In the above formula, cos α is taken as 1 when the numerator and the denominator are the same, and as −1 when the numerator and the denominator have opposite signs. Δzi,i-1 is the distance difference between the track points (xi, yi) and (xi-1, yi-1) of the i-th and (i−1)-th frames, Δzi-1,i-2 is the distance difference between the track points (xi-1, yi-1) and (xi-2, yi-2) of the (i−1)-th and (i−2)-th frames, and Δzi,i-2 is the distance difference between the track points (xi, yi) and (xi-2, yi-2) of the i-th and (i−2)-th frames; each is calculated in the same way as the distance difference described above.
If the sampling interval is not 0, the video image frames within the interval are ignored, and the coordinates of the track points in the current frame and the two preceding frames obtained by equal-interval sampling are still denoted by (xi, yi), (xi-1, yi-1) and (xi-2, yi-2), and so on.
Up to the i-th frame, the track point angle histogram A3[N3] and the track angle maximum value A3max among the track points of three adjacent video image frames sampled at equal intervals are extracted as follows:
If α ≤ A3max or α = 0: when A3max = 0, A3[0] is increased by 1; when A3max ≠ 0, the histogram bin corresponding to α is increased by 1.
If α > A3max and α ≠ 0: A3[N3] is traversed, and whenever A3[k] ≠ 0 (k = 1, 2, …, N3 − 1), the value of A3[k] is first migrated to the corresponding front bin and A3[k] is then set to 0; finally A3[N3 − 1] is set to 1 and A3max takes the value α.
The track point angle histogram is traversed and the track angle maximum value is extracted in a manner similar to that used for the track point distance difference histogram and the track point maximum distance difference, so the details are not repeated here. The track angle maximum value represents the maximum angle between the connecting lines of three adjacent track points of the target vehicle within a certain time interval, that is, the maximum direction change of the target vehicle in the video image to be analyzed.
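The angle between the two connecting lines can be computed as sketched below; the law-of-cosines form is consistent with the distance differences defined above, and the handling of coincident points is an assumption:

```python
import math
from typing import Tuple

Point = Tuple[float, float]

def track_angle(p_prev2: Point, p_prev1: Point, p_cur: Point) -> float:
    """Angle (degrees) between the lines p_prev1-p_prev2 and p_prev1-p_cur."""
    a = math.hypot(p_cur[0] - p_prev1[0], p_cur[1] - p_prev1[1])      # Δz(i, i-1)
    b = math.hypot(p_prev1[0] - p_prev2[0], p_prev1[1] - p_prev2[1])  # Δz(i-1, i-2)
    c = math.hypot(p_cur[0] - p_prev2[0], p_cur[1] - p_prev2[1])      # Δz(i, i-2)
    if a == 0.0 or b == 0.0:
        return 0.0                                   # assumption: coincident points give angle 0
    cos_a = (a * a + b * b - c * c) / (2.0 * a * b)  # law of cosines at the middle point
    cos_a = max(-1.0, min(1.0, cos_a))               # clamp against floating-point rounding
    return math.acos(cos_a) * 180.0 / math.pi
```

The resulting angles are then accumulated into A3[N3] with the same update and migration pattern used for D2[N2] above.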
In this embodiment, the track feature further includes track density, and the extracting the track feature of the target vehicle according to the track data further includes:
counting the total number of track points of the target vehicle in the video image to be analyzed;
starting from a first preset video image frame of the video image to be analyzed, calculating a first distance difference of track points of adjacent video image frames in a first preset direction and a second distance difference of track points of adjacent video image frames in a second preset direction, and counting the number of track points of which the sum of the first distance difference and the second distance difference is smaller than a preset threshold value;
forming the track density from the total number of track points and the number of track points for which the sum of the first distance difference and the second distance difference is smaller than the preset threshold value.
It should be noted that the words "first", "second", and the like are to be construed broadly and merely serve to distinguish one distance difference from another, so as to facilitate describing the present invention and simplify the description; they do not indicate or imply that the objects referred to have any particular relative importance and should therefore not be construed as limiting the invention. By analogy, the term "third" and similar terms mentioned later serve the same purpose and are likewise not to be construed as limiting the invention.
Starting from a first preset video image frame nth1, optionally, the first preset direction may be the vertical-axis (y) direction and the second preset direction may be the horizontal-axis (x) direction. Taking the i-th frame of the video image to be analyzed as an example, the track density D[N4] (N4 = 2) is extracted as follows:
D[0] = D[0] + 1, if Δxi,i-1 + Δyi,i-1 < M (i ≥ nth1)
D[1] = i
In the above formula, M is a threshold value; optionally, M = 2. Δxi,i-1 and Δyi,i-1 are the distance differences between the track points of the i-th and (i−1)-th frames of the video image to be analyzed along the horizontal-axis (x) and vertical-axis (y) directions, respectively, calculated as follows:
Δxi,i-1 = xi − xi-1
Δyi,i-1 = yi − yi-1
the past statistical experience shows that relatively high probability of abnormal vehicle tracking track occurs at the last segment of the vehicle tracking video image, so that the last segment frames of the video image to be analyzed are also selected when the track density is extracted. If the sum of the first distance difference and the second distance difference between the track point of the current ith frame and the track point of the previous frame, i.e. the ith-1 frame, is smaller than a preset threshold, calculating the track point of the current ith frame as a normal track point and forming the track density together with the total number of the track points, wherein the displacement of the track point of the current ith frame in the video image to be analyzed is in a normal range.
In this embodiment, the track feature further includes a track vertical offset, and the extracting the track feature of the target vehicle according to the track data further includes:
calculating a third distance difference of track points of adjacent video image frames in the video image to be analyzed in a third preset direction from a second preset video image frame of the target vehicle, wherein the third preset direction is the extending direction of a road where the target vehicle is located;
and calculating the sum of all third distance differences of the track points from the second preset video image frame to the last frame to obtain the track vertical offset.
Starting from the second preset video image frame nth2, taking the i-th frame (i = n − nth2 + 1, n − nth2 + 2, …, n) as an example, the track vertical offset ΔYsum of the track points over the last (n − nth2) frames is calculated as follows:
ΔYsum = Σ Δyi,i-1 , where Δyi,i-1 = yi − yi-1 and the summation runs over the track points from the second preset video image frame to the last frame.
The total displacement of the target vehicle along the vertical-axis direction in the last frames of the video to be analyzed is taken as a track feature, that is, as one of the conditions for judging whether the track is abnormal, so as to detect whether the running speed of the target vehicle is excessive when it is about to leave the monitoring range of the video acquisition end 11.
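A short sketch of this feature, assuming the y coordinate is aligned with the road's extending direction as in the preferred mode above:

```python
from typing import Sequence, Tuple

Point = Tuple[float, float]

def track_vertical_offset(track: Sequence[Point], last_frames: int) -> float:
    """Sum of the per-frame displacements along the road direction (y) over the last frames."""
    n = len(track)
    start = max(n - last_frames, 1)
    return sum(track[i][1] - track[i - 1][1] for i in range(start, n))
```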
After obtaining the track characteristics of the target vehicle, the process proceeds to step S13.
Step S13, processing the track features to obtain a track feature vector.
In this embodiment, the step S13 specifically includes the following steps:
normalizing one or more of the track point distance difference histogram, the track point maximum distance difference, the track point angle histogram, the track angle maximum value, the track density and the track vertical offset to obtain the track feature vector.
In this embodiment, the normalization process may be expressed by the following formulas:
T[N2 + N3 + 2] = D2max
T[N2 + N3 + 3] = A3max
T[N2 + N3 + 4] = ΔYsum
where r and s are parameters, and ND2 and NA3 are the numbers of track points used to calculate D2[N2] and A3[N3], respectively.
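For illustration only, a minimal sketch of assembling the feature vector T is given below. The layout of the histogram bins and density entries, and the normalisation by the point counts ND2 and NA3, are assumptions; the roles of the parameters r and s are not reproduced in the text and are therefore omitted:

```python
from typing import List, Sequence

def build_feature_vector(
    d2_hist: Sequence[int], a3_hist: Sequence[int],
    d2_max: float, a3_max: float,
    density: Sequence[int], dy_sum: float,
) -> List[float]:
    """Concatenate the normalised track features into one vector T (assumed layout)."""
    n_d2 = max(sum(d2_hist), 1)   # ND2: track points used for D2[N2]
    n_a3 = max(sum(a3_hist), 1)   # NA3: track points used for A3[N3]
    t: List[float] = [v / n_d2 for v in d2_hist]        # assumed: frequency-normalised D2 bins
    t += [v / n_a3 for v in a3_hist]                    # assumed: frequency-normalised A3 bins
    t += [density[0] / max(density[1], 1),              # assumed: density entries at
          float(density[1])]                            #   T[N2+N3] and T[N2+N3+1]
    t += [d2_max, a3_max, dy_sum]                       # T[N2+N3+2..4] as given in the text
    return t
```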
After normalization of the track feature is completed, the track feature vector is obtained, and then step S14 is performed.
Step S14, inputting the track feature vector into a classification model trained in advance, and judging the track abnormality condition of the target vehicle to obtain an abnormality determination result for the target vehicle.
In this embodiment, the classification model may be an SVM (Support Vector Machine) model or a Markov model. Detecting the track feature vector with a classification model reduces misjudgment of the information, makes the obtained vehicle track tracking abnormality determination result more accurate, and improves the efficiency of penalizing violations.
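As a hedged example, if scikit-learn's SVC is chosen as the classification model, applying the trained model to the feature vector of the target vehicle might look as follows (the label convention is an assumption):

```python
from typing import Sequence
from sklearn.svm import SVC

def judge_abnormality(model: SVC, feature_vector: Sequence[float]) -> int:
    """Feed one track feature vector to a pre-trained SVM classifier.
    Returns its predicted label, e.g. 1 = abnormal track, 0 = normal (assumed encoding)."""
    return int(model.predict([list(feature_vector)])[0])
```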
In this embodiment, the classification model may be obtained through training, and a specific training process is described below.
Referring to fig. 4, the steps for training a classification model in the vehicle track anomaly determination method provided by the embodiment of the present invention specifically include:
step S21, obtaining track data of the sample vehicle from the sample video image.
In this embodiment, the sample video images may be video material collected by a video acquisition device in the past.
Step S22, obtaining the track features of the sample vehicle according to the track data of the sample vehicle.
In this embodiment, the track features of the sample vehicle likewise include the track point distance difference histogram and the track point maximum distance difference, the track point angle histogram and the track angle maximum value, the track density and the track vertical offset; the extraction process is similar to step S12, and reference may be made to the description of step S12.
Step S23, processing the track features of the sample vehicle to obtain a track feature vector of the sample vehicle.
In this embodiment, the track feature of the sample vehicle is also normalized to obtain a track feature vector of the sample vehicle, and the normalization process is similar to the step S13, and reference may be made to the description of the step S13.
Step S24, inputting the track feature vector of the sample vehicle and the track abnormality condition of the sample vehicle into a classification model, and training the classification model.
In this embodiment, since the track abnormality of the sample vehicle is already known, the track feature vector of the sample vehicle and the track abnormality of the sample vehicle are input into the classification model, and a trained classification model is obtained through training, where the trained classification model is used to process and determine the track feature vector obtained from the video image to be analyzed, so as to obtain the track abnormality determination result of the target vehicle.
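Continuing the scikit-learn assumption used above, the training step could be sketched as follows; the kernel and hyper-parameters are illustrative choices, not specified by the embodiment:

```python
from typing import Sequence
from sklearn.svm import SVC

def train_classifier(
    sample_vectors: Sequence[Sequence[float]],   # track feature vectors of the sample vehicles
    sample_labels: Sequence[int],                # known track abnormality of each sample vehicle
) -> SVC:
    """Train the classification model on labelled sample-vehicle tracks (steps S21-S24)."""
    model = SVC(kernel="rbf")                    # assumed kernel; any suitable classifier could be used
    model.fit([list(v) for v in sample_vectors], list(sample_labels))
    return model
```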
If the functions of the above-described method steps are implemented in the form of software function modules and sold or used as an independent product, they may be stored in a computer-readable storage medium. Based on such an understanding, the technical solution of the present application, in essence or in the part contributing to the prior art, may be embodied in the form of a software product, which is stored in a storage medium and includes several instructions for performing all or part of the steps of the methods described in the various embodiments of the present application. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk.
Referring to fig. 5, an embodiment of the present invention further provides a vehicle track anomaly determination device 100, and since specific functions of each module of the device have been described in the above method steps, each function module of the vehicle track anomaly determination device 100 is briefly described below. The device comprises:
the data acquisition module 1001 is configured to obtain track data of a target vehicle from a video image to be analyzed.
And a feature extraction module 1002, configured to extract a track feature of the target vehicle according to the track data.
And the vector acquisition module 1003 is configured to process the track feature to obtain a track feature vector.
And the classification module 1004 is configured to input the track feature vector into a classification model trained in advance, and judge the track abnormality condition of the target vehicle to obtain an abnormality determination result for the target vehicle.
In this embodiment, the apparatus is further configured to train a classification model, specifically:
the data acquisition module 1001 acquires track data of a sample vehicle from a sample video image;
the feature extraction module 1002 obtains a track feature of the sample vehicle according to the track data of the sample vehicle;
the vector obtaining module 1003 processes the track feature of the sample vehicle to obtain a track feature vector of the sample vehicle;
the classification module 1004 inputs the trajectory feature vector of the sample vehicle and the trajectory abnormality of the sample vehicle into a classification model, and trains the classification model.
In summary, according to the vehicle track abnormality determination method and apparatus provided by the embodiments of the present invention, a classification model is trained on the track feature vectors and known track abnormality conditions of sample vehicles extracted from sample video images; the track feature vector of the target vehicle is then obtained from the track data of the target vehicle extracted from the video image to be analyzed and is input into the classification model trained in advance to obtain the track abnormality determination of the target vehicle. Because a plurality of track features are combined into the track feature vector of the target vehicle, factors such as the position, moving speed and direction of the target vehicle are considered comprehensively and the accuracy of tracking abnormality detection is improved; because the acquired track feature vector is processed and judged by the classification model, determination errors are reduced, the number of erroneous violation penalties is reduced, the effectiveness of penalties is improved, and penalty disputes are avoided.
The above description is only of the preferred embodiments of the present invention and is not intended to limit the present invention, but various modifications and variations can be made to the present invention by those skilled in the art. Any modification, equivalent replacement, improvement, etc. made within the spirit and principle of the present invention should be included in the protection scope of the present invention.