Disclosure of Invention
In order to overcome the defects of the prior art, the invention aims to provide a multi-target real-time tracking method integrating visual optical flow feature point tracking and motion trend estimation, so as to solve the problem of tracking in complex scenes, improve tracking precision and algorithm running efficiency, and make the tracking result smoother.
According to a first aspect of the present invention, there is provided a multi-target real-time tracking method integrating visual optical flow feature point tracking and motion trend estimation, comprising:
Step 10, acquiring the image Mi of the current i-th frame in real time through an image acquisition device, detecting the targets in Mi with a YOLO target detection algorithm to obtain the rectangular frame position of each target, and adding each detected target to the target detection result list Objectsi of the i-th frame.
Step 20, if the i-th frame is the first frame, configuring a new ID for each object in Objectsi, setting the first object parameter tracked_frames to 1 and the second object parameter patched_frames to 0, assigning Objectsi to the object tracking result list Tracksi of the current i-th frame, and jumping to step 70; if the i-th frame is not the first frame, proceeding to step 30.
Step 30, processing the target tracking result list Tracksi-1 of the (i-1)-th frame with a rectangular frame motion trend estimation algorithm to obtain the target motion state prediction position list Kalmani-1 of the current frame, and processing it with a rectangular frame area image optical flow feature point tracking algorithm to obtain the target optical flow prediction position list Flowi-1 of the current frame.
Step 40, comparing the rectangular frame positions in Kalmani-1 and Flowi-1 of each object j with the same ID and calculating the first overlapping degree IoU1; if IoU1 is not smaller than the first overlap preset threshold threshiou1, the target tracking prediction succeeds, the target is added to the list Predi-1 of predicted positions of the (i-1)-th frame targets in the i-th frame, and its patched_frames is increased by 1; if IoU1 is smaller than threshiou1, the target tracking fails and its tracked_frames is set to 0.
Step 50, calculating the matching degree between each object in Objectsi and each target in Predi-1, the matching degree comprising the second overlapping degree IoU2, the normalized center point distance Dist, the difference degree Diff and the image area similarity SimM; if the matching degree meets the super-parameter thresholds, IoU2 is used as the weight of the matching of the two targets, otherwise the corresponding weight is set to 0, and a matching degree weight matrix is constructed from these weights.
Step 60, obtaining the matching relation between the objects of the (i-1)-th frame and the i-th frame by applying the KM optimal matching algorithm to the matching degree weight matrix; for a successfully matched object, the ID of the object in the (i-1)-th frame is assigned to the matched object of the i-th frame, tracked_frames is increased by 1 and patched_frames is set to 0; for an unmatched object of the i-th frame, a new ID is assigned, tracked_frames is set to 1 and patched_frames to 0; for an unmatched object of the (i-1)-th frame, the ID is unchanged and patched_frames is increased by 1; each object is added to the Tracksi list.
Step 70, analyzing and judging the targets in Predi-1: if patched_frames is larger than the maximum target prediction frame number Pn, the target is removed from Predi-1 and its ID is recovered; if patched_frames is smaller than Pn and tracked_frames is larger than the minimum target tracking frame number Tn, the target is removed from Predi-1 and added to the Tracksi list.
Step 80, filtering Tracksi with a rectangular frame smoothing filtering algorithm and outputting the filtered result, which completes the tracking of the targets of the current i-th frame; the targets in Tracksi are added to the buffer list for tracking the next frame, and the flow returns to step 10 to process the next frame image.
Further, the multi-target real-time tracking method integrating visual optical flow feature point tracking and motion trend estimation is characterized in that the rectangular frame of a target comprises the upper left corner coordinate (x, y) of the object's rectangular area in the image, its width w and its height h.
Furthermore, the multi-target real-time tracking method integrating visual optical flow feature point tracking and motion trend estimation is characterized in that the rectangular frame motion trend estimation algorithm comprises: tracking and predicting the rectangular frame position Rj of each object j in Tracksi-1 with a Kalman filter, which predicts the rectangular frame position of the target in the current frame from the target's historical rectangular frame positions; the calculated motion state prediction position of target j is added to the list Kalmani-1.
The rectangular frame area image optical flow feature point tracking algorithm comprises: constructing a gray image pyramid for each of Mi and Mi-1; uniformly selecting K coordinate points within the rectangular frame position Rj(xj, yj, wj, hj) of each object j in Tracksi-1; calculating the positions of these K points in Mi with the pyramid-based LK optical flow point matching algorithm; averaging the position offsets of all corresponding point pairs between Mi-1 and Mi to obtain the optical flow tracking offset (dxj, dyj) of target j; and calculating the optical flow tracking result of target j as R'j = (xj + dxj, yj + dyj, wj, hj), which is added to the list Flowi-1.
Further, the multi-target real-time tracking method integrating visual optical flow feature point tracking and motion trend estimation is characterized in that, for a rectangular frame Ra(xa,ya,wa,ha) and a rectangular frame Rb(xb,yb,wb,hb), the overlapping degree of the rectangular frames is IoU = intersection area of the two rectangular frames / union area of the two rectangular frames.
The normalized center point distance is: Dist = sqrt(((xca - xcb)/Wm)^2 + ((yca - ycb)/Hm)^2), where (xca, yca) and (xcb, ycb) are the center point coordinates of the rectangular boxes Ra and Rb respectively, Wm represents the image width and Hm represents the image height.
The degree of difference is: Diff = abs(log(wa/wb)) + abs(log(ha/hb)), where wa, ha and wb, hb are the width and height of the rectangular boxes Ra and Rb respectively.
The calculation of the image area similarity SimM comprises: separating the RGB channels of the two rectangular frame area images Ma and Mb, counting the color histogram of each channel, and normalizing each histogram by the pixel area S{a,b} of the image Ma or Mb, where M{a,b}(i, j) denotes the pixel value of image Ma or Mb at position (i, j). The two color histogram vectors Va and Vb obtained in this way have the same dimension, and the similarity of the image areas where the two rectangular frames are located is calculated with the cosine similarity formula SimM = Va·Vb/(||Va|| × ||Vb||), where Va·Vb denotes the dot product of the two vectors and ||Va|| and ||Vb|| denote their norms.
Further, the multi-target real-time tracking method integrating visual optical flow feature point tracking and motion trend estimation is characterized in that the matching degree meeting the super-parameter thresholds means: IoU2 > threshiou2, Dist < threshdist, Diff < threshdiff and SimM > threshsim, where threshiou2 is the second overlap preset threshold, threshdist is the center point distance preset threshold, threshdiff is the difference preset threshold, and threshsim is the similarity preset threshold.
Further, the multi-target real-time tracking method integrating visual optical flow feature point tracking and motion trend estimation is characterized in that the rectangular frame smoothing filtering algorithm is as follows:
The positions of the rectangular frames of target j from the (i-2)-th frame to the i-th frame are denoted Ri-2,j(xi-2,j, yi-2,j, wi-2,j, hi-2,j), Ri-1,j(xi-1,j, yi-1,j, wi-1,j, hi-1,j) and Ri,j(xi,j, yi,j, wi,j, hi,j); the filtered rectangular frame positions from the (i-3)-th frame to the (i-1)-th frame are denoted R'i-3,j(x'i-3,j, y'i-3,j, w'i-3,j, h'i-3,j), R'i-2,j(x'i-2,j, y'i-2,j, w'i-2,j, h'i-2,j) and R'i-1,j(x'i-1,j, y'i-1,j, w'i-1,j, h'i-1,j).
Each of the four parameters (x, y, w, h) of the rectangular frame is filtered with a 2nd-order Butterworth low-pass filter; taking the x coordinate as an example:
x'i,j = b1·xi,j + b2·xi-1,j + b3·xi-2,j - a2·x'i-1,j - a3·x'i-2,j
and y, w, h are filtered in the same way, wherein (a1, a2, a3) and (b1, b2, b3) are the Butterworth low-pass filter parameters (a1 normalized to 1), calculated from the set sampling frequency and cut-off frequency. This yields the target rectangular frame filtering result R'i,j(x'i,j, y'i,j, w'i,j, h'i,j).
Furthermore, the multi-target real-time tracking method integrating visual optical flow feature point tracking and motion trend estimation is characterized in that the image acquisition device is arranged on a vehicle, the sensing area covers the front of the vehicle, and the targets comprise vehicles, pedestrians and non-motor vehicles.
According to a second aspect of the present invention, there is provided a computer device characterized by comprising:
a memory for storing instructions; and
a processor for calling the instructions stored in the memory to execute the multi-target real-time tracking method integrating visual optical flow feature point tracking and motion trend estimation of the first aspect.
According to a third aspect of the present invention, there is provided a computer-readable storage medium storing instructions that, when executed by a processor, perform the multi-target real-time tracking method of the first aspect integrating visual optical flow feature point tracking and motion trend estimation.
Compared with the prior art, the technical scheme of the invention has at least the following beneficial effects:
The position prediction of the previous-frame targets in the current frame is realized by combining a Kalman state filter with an optical flow tracking algorithm, which solves the problem that a single predictor cannot cover complex scenes.
For the problem of calculating the similarity of targets between consecutive frames, the method evaluates the targets in two dimensions: first, evaluation measures are designed for the rectangular frame overlap, the center point distance and the rectangular frame size; second, a cosine similarity measure based on the color histogram is proposed for the image similarity of the rectangular frame areas. Target similarity is thereby calculated more comprehensively, and target tracking precision is improved.
In the aspect of target bipartite graph matching, the Hungarian matching algorithm commonly used in the industry can only realize maximum matching, whereas the KM matching algorithm adopted by the invention further considers the matching weights on the basis of the Hungarian algorithm and realizes optimal matching.
To address the rectangular frame jitter of tracked targets, the target tracking algorithm of the invention provides a rectangular frame filtering algorithm based on a Butterworth low-pass filter, which makes the target tracking result smoother.
The invention finally achieves a running efficiency of 30 frames per second on an embedded system and tracks up to 64 targets, giving it an advantage in tracking performance.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the invention as claimed.
Detailed Description
The present invention will be described in further detail with reference to the drawings and examples, in order to make the objects, technical solutions and advantages of the present invention more apparent. It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the scope of the invention. In addition, the technical features of the embodiments of the present invention described below may be combined with each other as long as they do not conflict.
Algorithm marking description:
The target label array is initialized and denoted ListID; the list adopts a queue structure.
The current frame is denoted i, the current frame image Mi, the current target detection result Objectsi, and the current target tracking result Tracksi.
The previous frame image is denoted Mi-1 and the previous frame target tracking result Tracksi-1. The motion state prediction result of the previous-frame targets is denoted Kalmani-1, the optical flow prediction result of the previous-frame targets is denoted Flowi-1, and the predicted position result of the previous-frame targets in the current frame is denoted Predi-1.
Each tracked target carries the parameters target label id, target tracking frame number tracked_frames, target prediction frame number patched_frames, and target rectangular frame coordinates R(x, y, w, h).
The parameter Tn represents the minimum target tracking frame number, and the parameter Pn represents the maximum target prediction frame number.
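For concreteness, the bookkeeping defined above can be sketched in Python as follows; the TrackedTarget class name is an illustrative assumption, and the queue length of 128 is taken from the embodiment described later.

```python
from collections import deque
from dataclasses import dataclass

@dataclass
class TrackedTarget:
    id: int               # target label id, drawn from ListID
    tracked_frames: int   # frames the target has been tracked (compared with Tn)
    patched_frames: int   # frames the target has only been predicted (compared with Pn)
    x: float              # rectangular frame coordinates R(x, y, w, h)
    y: float
    w: float
    h: float

# ListID as a FIFO queue of reusable ids (the embodiment uses 128 ids, 0..127)
list_id = deque(range(128))

def new_id() -> int:
    return list_id.popleft()    # first-in first-out: take the oldest free id

def recycle_id(tid: int) -> None:
    list_id.append(tid)         # return the id once its target is dropped
```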
As shown in fig. 1, in one embodiment, the multi-target real-time tracking method for fusing visual optical flow feature point tracking and motion trend estimation provided by the present invention comprises the following steps:
1) The system is built: a camera is installed on the front windshield of a vehicle, covering the forward sensing area, and is connected to a controller through a video transmission line; the whole system is powered;
2) The system is initialized: the system is started, drivers are loaded, and a hardware self-check is performed; if the hardware fails, the system alarms and exits, and if the self-check is normal, the next step is entered;
 3) The algorithm collects the camera image of the current frame i in real time, which is marked as Mi, and uses the YOLO object detection algorithm to detect the object in the image, including vehicles, pedestrians and non-motor vehicles, which are marked as Objectsi;
4) If the current frame is the first frame, the target detection result is assigned to the current tracking result Tracksi and the flow jumps to step 10); if not, the next step is entered;
5) The tracking result Tracksi-1 of the (i-1)-th frame is processed with the rectangular frame motion trend estimation algorithm (see (4) in the algorithm key modules) to obtain the motion state prediction position of each target in the current frame, denoted Kalmani-1;
6) The tracking result Tracksi-1 of the (i-1)-th frame is processed with the rectangular frame area image optical flow feature point tracking algorithm (see (5) in the algorithm key modules) to obtain the optical flow prediction position of each target in the current frame, denoted Flowi-1;
7) Kalmani-1 and Flowi-1 are compared and the overlapping degree IoU1 of the rectangular frame positions of corresponding targets in the two lists is calculated; if IoU1 is not smaller than threshiou1, the target is added to the Predi-1 list and its patched_frames parameter is increased by 1; otherwise the target tracking fails;
8) For each object in the i-th frame detection result Objectsi and each target in the (i-1)-th frame tracking prediction result Predi-1, the matching degree of the rectangular frames (see (1) rectangular frame matching degree calculation in the algorithm key modules) and the similarity SimM of the image areas where they are located (see (2) rectangular frame area image similarity calculation in the algorithm key modules) are calculated; if the matching degree between two rectangular frames satisfies IoU2 > threshiou2, Dist < threshdist, Diff < threshdiff and SimM > threshsim, a matrix is built with IoU2 as the weight;
9) The optimal bipartite matching algorithm for rectangular frames (see (6) in the algorithm key modules) is executed on the weight matrix to obtain the correspondence; for a successfully matched target, the corresponding target id of the (i-1)-th frame is assigned to the target of the current i-th frame, the target parameter tracked_frames is increased by 1 and patched_frames is set to 0; for an unmatched target of the i-th frame, a new id is allocated from the target label array ListID, tracked_frames is set to 1 and patched_frames to 0; for an unmatched target of the (i-1)-th frame, the target label is unchanged and patched_frames is increased by 1; all three kinds of targets are added to the Tracksi list;
10) The targets in the Predi-1 list are analyzed and judged: if patched_frames is larger than the maximum target prediction frame number Pn, the target is removed from the list and its id is recycled to the target label list ListID; if patched_frames is smaller than Pn and tracked_frames is larger than the minimum target tracking frame number Tn, the target is removed from Predi-1 and added to the Tracksi list; the targets of Predi-1 are cached for subsequent target tracking calculation;
11) Each target in the Tracksi list is filtered with the rectangular frame smoothing filtering algorithm (see (3) rectangular frame smoothing filtering algorithm in the algorithm key modules) to reduce the jitter of the rectangular frames; the filtered result is added to the cache list for tracking the next frame, the target tracking result is output to the intelligent driving decision module for decision making, and the current frame target tracking is completed.
12) Return to step 3).
The algorithm key module involved in the above steps comprises the following parts:
(1) Rectangular frame matching degree calculation: for a rectangular frame Ra(xa,ya,wa,ha) and a rectangular frame Rb(xb,yb,wb,hb), three measures are adopted:
 Overlap IoU = intersection area of two rectangular boxes/union area of two rectangular boxes.
Normalized center point distance: Dist = sqrt(((xca - xcb)/Wm)^2 + ((yca - ycb)/Hm)^2), where (xca, yca) and (xcb, ycb) are the center point coordinates of the rectangular boxes Ra and Rb respectively, Wm represents the image width and Hm represents the image height.
Difference: Diff = abs(log(wa/wb)) + abs(log(ha/hb)).
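A minimal Python sketch of the three measures, assuming boxes are (x, y, w, h) tuples with the top-left corner convention used above; the function names are illustrative:

```python
import math

def iou(a, b):
    xa, ya, wa, ha = a
    xb, yb, wb, hb = b
    iw = max(0.0, min(xa + wa, xb + wb) - max(xa, xb))  # intersection width
    ih = max(0.0, min(ya + ha, yb + hb) - max(ya, yb))  # intersection height
    inter = iw * ih
    union = wa * ha + wb * hb - inter
    return inter / union if union > 0 else 0.0

def dist(a, b, Wm, Hm):
    # normalized Euclidean distance between the box centers
    cxa, cya = a[0] + a[2] / 2, a[1] + a[3] / 2
    cxb, cyb = b[0] + b[2] / 2, b[1] + b[3] / 2
    return math.hypot((cxa - cxb) / Wm, (cya - cyb) / Hm)

def diff(a, b):
    # scale difference of the widths and heights
    return abs(math.log(a[2] / b[2])) + abs(math.log(a[3] / b[3]))
```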
(2) Calculating the similarity SimM of the rectangular frame region image:
The RGB channels of the rectangular frame area images Ma and Mb are separated, the color histograms are counted and normalized: each histogram bin count is divided by the pixel area S{a,b} of the corresponding image Ma or Mb, where M{a,b}(i, j) denotes the pixel value of image Ma or Mb at position (i, j).
The two images thus correspondingly form color histogram vectors Va and Vb of the same dimension, and the similarity of the two rectangular area images is calculated with the cosine similarity formula:
SimM=Va·Vb/(||Va||×||Vb||)
where Va·Vb denotes the dot product of the two vectors, and ||Va|| and ||Vb|| denote their norms respectively.
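A Python sketch of this similarity; the 8-bins-per-channel quantization is an assumption of the sketch, since the text fixes only the per-pixel-area normalization and the cosine formula:

```python
import numpy as np

def color_histogram(img, bins=8):
    # img: HxWx3 uint8 crop of a rectangular frame area (Ma or Mb)
    parts = []
    for c in range(3):  # one histogram per RGB channel
        h, _ = np.histogram(img[:, :, c], bins=bins, range=(0, 256))
        parts.append(h)
    v = np.concatenate(parts).astype(np.float64)
    return v / (img.shape[0] * img.shape[1])  # normalize by the pixel area S

def sim_m(img_a, img_b):
    va, vb = color_histogram(img_a), color_histogram(img_b)
    return float(np.dot(va, vb) / (np.linalg.norm(va) * np.linalg.norm(vb)))
```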
(3) Rectangular frame smoothing filtering algorithm: to make the rectangular frame of a tracked target more stable, the position of the tracked target's rectangular frame is adjusted by a smoothing filtering algorithm. The current frame is denoted i, and the target labeled j is smoothed as follows:
The positions of the rectangular frames of target j from the (i-2)-th frame to the i-th frame are denoted Ri-2,j(xi-2,j, yi-2,j, wi-2,j, hi-2,j), Ri-1,j(xi-1,j, yi-1,j, wi-1,j, hi-1,j) and Ri,j(xi,j, yi,j, wi,j, hi,j); the filtered rectangular frame positions from the (i-3)-th frame to the (i-1)-th frame are denoted R'i-3,j(x'i-3,j, y'i-3,j, w'i-3,j, h'i-3,j), R'i-2,j(x'i-2,j, y'i-2,j, w'i-2,j, h'i-2,j) and R'i-1,j(x'i-1,j, y'i-1,j, w'i-1,j, h'i-1,j).
Each of the four parameters (x, y, w, h) of the rectangular frame is filtered with a 2nd-order Butterworth low-pass filter; taking the x coordinate as an example, the calculation formula is:
x'i,j = b1·xi,j + b2·xi-1,j + b3·xi-2,j - a2·x'i-1,j - a3·x'i-2,j
and y, w, h are filtered in the same way, wherein (a1, a2, a3) and (b1, b2, b3) are the Butterworth low-pass filter parameters (a1 normalized to 1), calculated from the set sampling frequency and cut-off frequency.
The target rectangular frame filtering result R'i,j(x'i,j, y'i,j, w'i,j, h'i,j) is returned and saved.
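A sketch of one filtering step in Python; the 30 Hz sampling frequency matches the system's frame rate, while the 3 Hz cut-off is an assumed example value:

```python
from scipy.signal import butter

# 2nd-order low-pass; Wn is the cut-off normalized by the Nyquist frequency fs/2
b, a = butter(2, 3.0 / (30.0 / 2.0))   # a[0] corresponds to a1 and equals 1

def smooth(raw, filt):
    # raw:  [p_i, p_{i-1}, p_{i-2}]  current and two previous raw values
    # filt: [p'_{i-1}, p'_{i-2}]     two previous filtered values
    return (b[0] * raw[0] + b[1] * raw[1] + b[2] * raw[2]
            - a[1] * filt[0] - a[2] * filt[1])

# applied independently to each of the four parameters x, y, w, h
```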
(4) Rectangular frame motion trend estimation algorithm: in order to improve the accuracy of target tracking, the invention predicts the rectangular frame position of each target with a 4-dimensional Kalman state filter over the rectangular frame parameters (x, y, w, h), predicting the rectangular frame position of each target in the current frame from its historical rectangular frame positions.
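A sketch of such a predictor with OpenCV's Kalman filter. The text fixes the 4-dimensional measurement (x, y, w, h); the constant-velocity 8-dimensional state used below is an assumption about how the motion trend is modeled:

```python
import numpy as np
import cv2

kf = cv2.KalmanFilter(8, 4)  # state (x, y, w, h, vx, vy, vw, vh), measurement (x, y, w, h)
kf.transitionMatrix = np.eye(8, dtype=np.float32)
for k in range(4):
    kf.transitionMatrix[k, k + 4] = 1.0  # p_i = p_{i-1} + v_{i-1}
kf.measurementMatrix = np.eye(4, 8, dtype=np.float32)
kf.processNoiseCov = np.eye(8, dtype=np.float32) * 1e-2
kf.measurementNoiseCov = np.eye(4, dtype=np.float32) * 1e-1

def init_box(kf, box):
    # seed the state with the first observed box and zero velocity
    kf.statePost = np.array([*box, 0, 0, 0, 0], dtype=np.float32).reshape(8, 1)

def predict_box(kf):
    x, y, w, h = kf.predict()[:4, 0]
    return float(x), float(y), float(w), float(h)

def correct_box(kf, box):
    kf.correct(np.array(box, dtype=np.float32).reshape(4, 1))
```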
(5) Rectangular frame area image optical flow feature point tracking algorithm: the invention constructs a gray pyramid for each of the previous and current frame images and calculates the position of the previous frame's rectangular frame area image in the current frame with the LK optical flow tracking algorithm, thereby realizing optical flow tracking. The algorithm steps are as follows:
A gray pyramid is constructed for each of the previous and current frame images.
For each target in the previous frame target list Tracksi-1, K points are uniformly selected within its rectangular frame area.
The positions of these K points in the current frame image are calculated using the LK optical flow point matching algorithm.
Points whose matching failed are deleted, and the lateral and longitudinal offsets between the two frames are calculated for the remaining points.
The rectangular frame position of the target in the current frame is calculated from the average offset.
The optical flow tracking result is added to the Flowi-1 list and returned.
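A Python sketch of these steps; cv2.calcOpticalFlowPyrLK builds the gray pyramids internally up to maxLevel, and K = 5 points per side and the window size are assumed example values:

```python
import numpy as np
import cv2

def flow_predict(gray_prev, gray_cur, box, K=5):
    x, y, w, h = box
    xs = np.linspace(x, x + w, K)
    ys = np.linspace(y, y + h, K)
    pts = np.array([[px, py] for py in ys for px in xs], np.float32).reshape(-1, 1, 2)
    nxt, status, _err = cv2.calcOpticalFlowPyrLK(gray_prev, gray_cur, pts, None,
                                                 winSize=(21, 21), maxLevel=3)
    ok = status.ravel() == 1            # drop points whose matching failed
    if not ok.any():
        return None                     # optical flow tracking failed for this target
    d = (nxt[ok] - pts[ok]).reshape(-1, 2)
    dx, dy = d.mean(axis=0)             # average lateral/longitudinal offset
    return (x + float(dx), y + float(dy), w, h)
```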
(6) Rectangular frame optimal bipartite matching algorithm: the invention converts the target matching problem between rectangular frame list A and rectangular frame list B into a bipartite graph matching problem and calculates the optimal matching result with the KM optimal matching algorithm.
First, the IoU overlap between the rectangular boxes of the two lists is calculated.
A weight matrix is constructed with the overlap degree as the weight.
The weight matrix is processed by the KM algorithm to obtain the matching relationship between the rectangular frames of list A and list B under optimal matching.
Rectangular frames that are successfully matched are considered the same target, and the matching result is returned.
As shown in FIG. 2, list A contains 3 targets and list B contains 4 targets. The overlapping degree IoU, with value range (0-1), is calculated for each pair of target rectangular frames across the two lists to obtain a weight matrix, and the KM optimal bipartite matching algorithm is then applied to obtain the matching result. After the operation, A1, A2 and A3 in FIG. 2 are matched with B1, B2 and B4 of list B respectively.
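A sketch of this matching; SciPy's linear_sum_assignment is used here as a stand-in implementation of the KM (Kuhn-Munkres) optimal assignment over the IoU weight matrix:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def km_match(weights, min_w=1e-6):
    # weights[m][n]: IoU-based weight between target m of list A and target n of list B
    W = np.asarray(weights, dtype=np.float64)
    rows, cols = linear_sum_assignment(W, maximize=True)
    # keep only pairs whose weight survived the threshold gating (non-zero)
    return [(m, n) for m, n in zip(rows, cols) if W[m, n] > min_w]

# e.g. for FIG. 2, a 3x4 weight matrix would yield pairs such as [(0, 0), (1, 1), (2, 3)],
# i.e. A1-B1, A2-B2, A3-B4
```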
Specifically, in some embodiments, the invention is implemented by the steps of:
 1) The system is built by installing a camera on a front windshield of a vehicle, covering a forward sensing area, connecting with a controller through a video transmission line and supplying power to the whole system.
2) Initializing the system, namely starting the system, loading the drive, performing self-checking on hardware functions, alarming and exiting the system if the hardware fails, and entering the next step if the self-checking of the system is normal.
3) The camera image of the current frame i is acquired in real time and denoted Mi. A YOLO target detection algorithm is used to detect the targets in the image; each target position comprises the pixel coordinates x and y of the upper left corner of the object's rectangular frame area in the image together with its width w and height h, the target categories comprise vehicles, pedestrians and non-motor vehicles, and the detection result is denoted Objectsi.
4) If the current frame i is the first frame, a target label queue ListID is initialized, with queue length 128 and data range 0-127, ids being taken from and returned to the queue on a first-in first-out basis; an ID is assigned to each target in the detection result Objectsi, the target parameter tracked_frames is set to 1 and patched_frames to 0, the targets are put into the list Tracksi, and the flow jumps to step 12); if the current frame i is not the first frame, the next step is entered.
5) The (i-1)-th frame target tracking result Tracksi-1 is fetched from the buffer, and the rectangular frame position Rj of each tracked target j is tracked and predicted with a Kalman filter to obtain the motion state predicted position of each target in the current frame, which is added to the list Kalmani-1; this module is named the rectangular frame motion trend estimation algorithm.
6) The (i-1)-th frame image Mi-1 is taken out of the buffer and a gray image pyramid is constructed for each of Mi and Mi-1. For the rectangular frame position Rj(xj, yj, wj, hj) of each tracked target j in Tracksi-1, K coordinate points are taken in each of the row and column directions, the positions of these K×K points in the current frame image are calculated with the pyramid-based LK optical flow point matching algorithm, and the average position offset between the corresponding points is recorded as (dxj, dyj). The optical flow tracking result of each tracked target in the current frame is then calculated as R'j = (xj + dxj, yj + dyj, wj, hj) and added to the list Flowi-1. This module is named the rectangular frame area image optical flow feature point tracking algorithm.
7) Kalmani-1 and Flowi-1 are compared and the overlapping degree IoU1 between the rectangular frame positions of corresponding targets in the two lists is calculated; if IoU1 is not smaller than threshiou1, the target is added to the Predi-1 list and its patched_frames parameter value is increased by 1; otherwise the target tracking fails and its tracked_frames parameter is set to 0.
8) For each target in the current frame detection result Objectsi and each target in the (i-1)-th frame tracking prediction result Predi-1, the matching degree of the rectangular frames is calculated, comprising the overlapping degree IoU2, the normalized center point distance Dist and the difference degree Diff, with the following formulas:
a. IoU2 = intersection area of the two rectangular boxes / union area of the two rectangular boxes.
b. Dist = sqrt(((xca - xcb)/Wm)^2 + ((yca - ycb)/Hm)^2), where (xca, yca) and (xcb, ycb) are the center point coordinates of the rectangular boxes Ra and Rb respectively, Wm represents the image width and Hm represents the image height.
c. Diff = abs(log(wa/wb)) + abs(log(ha/hb)).
9) Then, for each object in Objectsi and each target in Predi-1, the similarity SimM of the image areas where they are located is calculated in two steps:
a. The RGB channels of the two rectangular frame area images Ma and Mb are separated, the color histograms are counted and normalized: each histogram bin count is divided by the pixel area S{a,b} of the corresponding image Ma or Mb, where M{a,b}(i, j) denotes the pixel value of image Ma or Mb at position (i, j).
b. The two images correspondingly form color histogram vectors Va and Vb of the same dimension, and the similarity of the two rectangular area images is calculated with the cosine similarity formula:
SimM=Va·Vb/(||Va||×||Vb||)
where Va·Vb denotes the dot product of the two vectors, and ||Va|| and ||Vb|| denote their norms respectively.
10) If the matching degree between a target of Objectsi and a target of Predi-1 meets the super-parameter thresholds IoU2 > threshiou2, Dist < threshdist, Diff < threshdiff and SimM > threshsim, IoU2 is used as the corresponding weight of the constructed matrix; if the thresholds are not met, the corresponding weight in the matrix is set to 0.
11) The KM optimal bipartite matching algorithm is applied to the weight matrix to calculate the correspondence between Objectsi and Predi-1; for a successfully matched target, the corresponding target id of the (i-1)-th frame in Predi-1 is assigned to the matched object of Objectsi in the current i-th frame, the target parameter tracked_frames is increased by 1 and patched_frames is set to 0; for an unmatched target in Objectsi, a new id is allocated from the target label array ListID, tracked_frames is set to 1 and patched_frames to 0; for an unmatched target in Predi-1, the id is unchanged and patched_frames is increased by 1; all three kinds of targets are added to the target list Tracksi.
12) The cache list is emptied, and the targets in the Predi-1 list are analyzed and judged (a sketch of this rule is given after these steps): if patched_frames is larger than the maximum target prediction frame number Pn, the target is removed from the list and its id is recycled to the target label queue ListID; if patched_frames is smaller than Pn and tracked_frames is larger than the minimum target tracking frame number Tn, the target is removed from Predi-1 and added to the Tracksi and cache lists; if patched_frames is smaller than Pn and tracked_frames is smaller than Tn, the target is added to the cache list, which is used for subsequent target tracking calculation.
13) Each target in the Tracksi list is filtered with the rectangular frame smoothing filtering algorithm to reduce the jitter of the rectangular frames; the calculation steps are as follows:
a. The original positions of the rectangular frames of the target labeled j from the (i-2)-th frame to the i-th frame are denoted Ri-2,j(xi-2,j, yi-2,j, wi-2,j, hi-2,j), Ri-1,j(xi-1,j, yi-1,j, wi-1,j, hi-1,j) and Ri,j(xi,j, yi,j, wi,j, hi,j); the filtered rectangular frame positions from the (i-3)-th frame to the (i-1)-th frame are denoted R'i-3,j(x'i-3,j, y'i-3,j, w'i-3,j, h'i-3,j), R'i-2,j(x'i-2,j, y'i-2,j, w'i-2,j, h'i-2,j) and R'i-1,j(x'i-1,j, y'i-1,j, w'i-1,j, h'i-1,j).
b. Each of the four parameters (x, y, w, h) of the rectangular frame is filtered with a 2nd-order Butterworth low-pass filter; taking the x coordinate as an example, the calculation formula is:
x'i,j = b1·xi,j + b2·xi-1,j + b3·xi-2,j - a2·x'i-1,j - a3·x'i-2,j
and y, w, h are filtered in the same way, wherein (a1, a2, a3) and (b1, b2, b3) are the Butterworth low-pass filter parameters (a1 normalized to 1), calculated from the set sampling frequency and cut-off frequency.
c. The target rectangular frame filtering result R'i,j(x'i,j, y'i,j, w'i,j, h'i,j) is returned and saved, and the rectangular frame in Tracksi is updated accordingly.
14) The Tracksi target tracking list is added to the buffer for tracking the next frame and is output to the intelligent driving decision module for decision making; the current frame target tracking is completed.
15) Return to step 3).
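As referenced in step 12) above, a minimal sketch of the target lifecycle rule, assuming pred (Predi-1), tracks (Tracksi) and cache are plain lists of tracked-target records, list_id is the FIFO id queue ListID, and Pn/Tn are the frame-count thresholds:

```python
def prune_predictions(pred, tracks, cache, list_id, Pn, Tn):
    for t in list(pred):                 # iterate over a copy so we can remove
        if t.patched_frames > Pn:
            pred.remove(t)
            list_id.append(t.id)         # recycle the id into ListID
        elif t.tracked_frames > Tn:
            pred.remove(t)
            tracks.append(t)             # promote: still a confidently tracked target
            cache.append(t)              # and carry it to the next frame
        else:
            cache.append(t)              # keep only for subsequent matching
```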
It is to be understood that the invention is not limited to the precise arrangements and instrumentalities shown in the drawings, which have been described above, and that various modifications and changes may be effected without departing from the scope thereof. The scope of the invention is limited only by the appended claims.