CN106887010B - Ground moving object detection method based on high-level scene information - Google Patents

Ground moving object detection method based on high-level scene information

Info

Publication number
CN106887010B
Authority
CN
China
Prior art keywords
target
formula
optical flow
frame
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201710023810.2A
Other languages
Chinese (zh)
Other versions
CN106887010A (en)
Inventor
杨涛
任强
张艳宁
刘小飞
段文成
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Northwestern Polytechnical University
Shenzhen Institute of Northwestern Polytechnical University
Original Assignee
Northwestern Polytechnical University
Shenzhen Institute of Northwestern Polytechnical University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Northwestern Polytechnical University and Shenzhen Institute of Northwestern Polytechnical University
Priority to CN201710023810.2A
Publication of CN106887010A
Publication of CN106887010B
Application granted
Status: Active
Anticipated expiration


Abstract

The invention discloses a ground moving target detection method based on high-level scene information, addressing the false-alarm problem of existing multi-target detection methods. The technical solution first extracts preliminary detection results with the frame-difference method; it then computes the optical flow vector of each point and, by superimposing each target of the current frame with its optical flow vector, predicts the target's position in the next frame, associating targets across frames and eliminating part of the false alarms; finally, it uses the high-level scene information, the fundamental matrix F, to distinguish moving points from background points, eliminating a large number of false alarms.

Description

Translated from Chinese
Ground moving object detection method based on high-level scene information

Technical Field

The invention relates to a multi-target detection method, and in particular to a ground moving target detection method based on high-level scene information.

Background Art

Multi-target detection is a challenging task in computer vision. Traditional motion detection is mostly based on frame differencing. However, because the scene is in fact three-dimensional, frame differencing suffers from parallax, which causes a large number of false alarms. The document "Goyal H. Frame Differencing with Simulink model for Moving Object Detection [J]. International Journal of Advanced Research in Computer Engineering & Technology, 2013, 2(1)." discloses a multi-target detection method (the inter-frame difference method). The method assumes that the background of the scene is planar, so that anything above the ground plane shows up in the difference image. Because it does not account for the parallax caused by the three-dimensional structure of the scene under an affine transformation, it produces a large number of false alarms, is not applicable to real three-dimensional scenes, and its output contains noise.

Summary of the Invention

To overcome the false-alarm deficiency of existing multi-target detection methods, the present invention provides a ground moving target detection method based on high-level scene information. The method first extracts preliminary detection results with the frame-difference method; it then computes the optical flow vector of each point and, by superimposing each target of the current frame with its optical flow vector, predicts the target's position in the next frame, associating targets across frames and removing part of the false alarms; finally, it uses the high-level scene information, the fundamental matrix F, to distinguish moving points from background points, removing a large number of false alarms.

The technical solution adopted by the present invention to solve this problem is a ground moving target detection method based on high-level scene information, characterized by comprising the following steps:

Step one: frame difference.

Different image registration algorithms are used for scenes captured at different heights. Video sequences captured at high altitude satisfy the three assumptions of sparse optical flow, so Lucas-Kanade sparse optical flow is used to match image feature points; images captured at low altitude do not satisfy the optical flow assumptions, so the Sobel operator is used to extract image feature points. After image matching by sparse optical flow or the Sobel operator, RANSAC is used to estimate the affine transformation between the two images:

C'_p = A_{k-1} C_p,  C'_n = A_{k+1} C_n   (1)

where C_p and C_n are the pixel coordinates (in homogeneous form) of the feature points in the previous and next frames, C'_p and C'_n are the transformed pixel coordinates, and A_{k-1} and A_{k+1} are 2×3 affine transformation matrices. The affine-transformed previous and next frames are each differenced against the current frame to obtain the preliminary detection result:

D_k = ||S_k - S'_{k-1}|| ∪ ||S_k - S'_{k+1}||   (2)

where D_k is the difference image, S'_{k-1} and S'_{k+1} are the affine-transformed previous and next frames, and S_k is the current frame. Finally, the difference image is binarized with the threshold set to 40.
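As an illustration, the following minimal Python sketch (assuming OpenCV and NumPy; apart from the binarization threshold of 40, the function names and parameter values are illustrative rather than taken from the patent) registers the previous and next frames to the current frame with a RANSAC-estimated 2×3 affine transform and forms the binarized union difference of formula (2):

```python
import cv2
import numpy as np

def frame_difference(prev_gray, cur_gray, next_gray, thresh=40):
    """Preliminary detection mask D_k from three consecutive grayscale frames."""
    h, w = cur_gray.shape

    def register_to_current(src):
        # Feature points in the source frame, matched into the current
        # frame with Lucas-Kanade sparse optical flow (high-altitude case).
        pts = cv2.goodFeaturesToTrack(src, maxCorners=500,
                                      qualityLevel=0.01, minDistance=7)
        matched, status, _ = cv2.calcOpticalFlowPyrLK(src, cur_gray, pts, None)
        good = status.ravel() == 1
        # RANSAC estimate of the 2x3 affine transform of formula (1).
        A, _ = cv2.estimateAffine2D(pts[good], matched[good], method=cv2.RANSAC)
        return cv2.warpAffine(src, A, (w, h))

    s_prev = register_to_current(prev_gray)   # S'_{k-1}
    s_next = register_to_current(next_gray)   # S'_{k+1}

    # D_k = ||S_k - S'_{k-1}|| ∪ ||S_k - S'_{k+1}||, formula (2), with the
    # union realized as a pixel-wise OR of the two binarized differences.
    _, d1 = cv2.threshold(cv2.absdiff(cur_gray, s_prev), thresh, 255,
                          cv2.THRESH_BINARY)
    _, d2 = cv2.threshold(cv2.absdiff(cur_gray, s_next), thresh, 255,
                          cv2.THRESH_BINARY)
    return cv2.bitwise_or(d1, d2)
```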

Step two: optical flow association.

Optical flow estimation: the classical optical flow method rests on the assumptions of constant brightness, small pixel motion, and spatial consistency. In continuous video, the grey value of the pixels belonging to an object is assumed not to change with motion, so:

I(x, y, t) = I(x + dx, y + dy, t + dt)   (3)

where x and y are the horizontal and vertical coordinates and I is the image grey value. A Taylor expansion of the above gives:

I_x dx + I_y dy + I_t dt = 0   (4)

where I_x, I_y, and I_t denote the gradients along the corresponding directions. In vector form:

[I_x  I_y] [u  v]^T = -I_t   (5)

where u and v are the optical flow components along the corresponding directions. The above can be written as:

A d = b   (6)

The optical flow vector d is obtained as the least-squares minimizer of ||Ad - b||²:

d = (A^T A)^{-1} A^T b   (7)
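For concreteness, the window-wise least-squares solve of formulas (4) to (7) for a single pixel might look as follows (a NumPy sketch; the window size and all names are illustrative assumptions):

```python
import numpy as np

def lk_flow_at(I0, I1, x, y, win=7):
    """Optical flow (u, v) at pixel (x, y) between grayscale frames I0 and I1."""
    r = win // 2
    I0f, I1f = I0.astype(float), I1.astype(float)

    # Spatial and temporal gradients over the window, formula (4).
    Ix = np.gradient(I0f, axis=1)[y - r:y + r + 1, x - r:x + r + 1]
    Iy = np.gradient(I0f, axis=0)[y - r:y + r + 1, x - r:x + r + 1]
    It = (I1f - I0f)[y - r:y + r + 1, x - r:x + r + 1]

    # One equation Ix*u + Iy*v = -It per window pixel: A d = b, formula (6).
    A = np.stack([Ix.ravel(), Iy.ravel()], axis=1)
    b = -It.ravel()

    # Least-squares solution d = (A^T A)^{-1} A^T b, formula (7).
    d, *_ = np.linalg.lstsq(A, b, rcond=None)
    return d  # (u, v)
```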

Prediction association: first, let the coordinates of each point in frame k-1 be (x_{k-1}, y_{k-1}). With the optical flow vector V obtained by the optical flow estimation above, the target's position in the next frame is predicted as:

(x̂_k, ŷ_k) = (x_{k-1} + V_x, y_{k-1} + V_y)   (8)

where (x̂_k, ŷ_k) are the predicted coordinates and (V_x, V_y) is the optical flow motion vector.

Next, for every target obtained by the first frame difference, its position in the next frame is predicted by optical flow. If enough of a target's points match a target in the next frame, the two are taken to be the same target; the decision function is defined by formula (9).

In formula (9), the compared target is the one obtained by the secondary frame-difference detection, and S_k is the state equation of a point. The confidence of the target match is then computed by formula (10).

In formula (10), α is the total number of points belonging to the target. The association probability of two targets is:

α_ρ = α / β   (11)

where β is the total number of points within the target. The probability threshold for accepting an association between two targets is set to ε = 0.8; if two targets are associated with each other, the association relation between them is defined by formula (12).

For each target, an association set A = {A_m, ..., A_n} is defined, where each A_m denotes one of the target's pairwise associations. A target is taken as a candidate target only when the number of elements in its association set is greater than a set threshold.
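A minimal sketch of the prediction-association test of formulas (8) to (11) follows; the acceptance threshold ε = 0.8 comes from the method, while the data layout and names are illustrative assumptions:

```python
import numpy as np

def associated(target_pts, flow, next_target_mask, eps=0.8):
    """Decide whether a target in frame k-1 associates with one in frame k.

    target_pts: (N, 2) integer pixel coordinates of one detected target.
    flow: (H, W, 2) optical flow field holding (V_x, V_y) per pixel.
    next_target_mask: boolean (H, W) mask of one target in the next frame.
    """
    # Shift every point by its optical flow vector, formula (8).
    v = flow[target_pts[:, 1], target_pts[:, 0]]
    pred = np.rint(target_pts + v).astype(int)

    h, w = next_target_mask.shape
    inside = ((pred[:, 0] >= 0) & (pred[:, 0] < w) &
              (pred[:, 1] >= 0) & (pred[:, 1] < h))
    pred = pred[inside]

    # alpha: predicted points landing on the next-frame target;
    # beta: all points of the target; accept when alpha/beta >= eps, formula (11).
    alpha = np.count_nonzero(next_target_mask[pred[:, 1], pred[:, 0]])
    beta = len(target_pts)
    return alpha / beta >= eps
```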

Step three: motion detection based on high-level information.

The Sobel operator is used to extract image feature points, which are matched by shortest distance. Let x = (x, y) and x' = (x', y') be a pair of matched points in the two images, converted to homogeneous vectors X = [x, y, 1]^T and X' = [x', y', 1]^T; they satisfy:

X'^T F X = 0   (13)

where F is the 3×3 fundamental matrix, obtained with the normalized eight-point algorithm by solving a system of linear equations. In practice the matched feature points never satisfy the above equation exactly, so Sampson correction is used: inliers and outliers are judged from the magnitude of the matching correction. The Sampson confidence K is defined as:

K = X'^T F X / M   (14)

where (FX)_1 = f_{11} x + f_{12} y + f_{13}, with (x, y) the pixel coordinates of X; (FX)_2, (F^T X')_1, and (F^T X')_2 are determined analogously, and M = (FX)_1^2 + (FX)_2^2 + (F^T X')_1^2 + (F^T X')_2^2, which fixes the Sampson confidence of every point.

An outlier matrix, of size H×W where H and W are the image height and width, is defined by formula (15); it marks each point as an inlier or an outlier according to its Sampson confidence.

The inlier/outlier ratio of each candidate target is computed by formula (16).

In formula (16), the ratio is taken for a candidate target over the total number of the candidate's points. The moving-target decision function M is then defined by formula (17).

In formula (17), η is the outlier probability threshold: only when a candidate's outlier ratio is greater than η is it judged to be a moving target.
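The sketch below (assuming OpenCV and NumPy) estimates the fundamental matrix with the normalized eight-point algorithm and scores each match with the Sampson confidence of formula (14), taking M as the standard Sampson denominator; the cut-off values in is_moving are illustrative, since the patent does not fix η numerically:

```python
import cv2
import numpy as np

def sampson_confidence(pts1, pts2):
    """Sampson confidence K for (N, 2) matched pixel coordinates pts1, pts2."""
    # Normalized eight-point estimate of the 3x3 fundamental matrix F.
    F, _ = cv2.findFundamentalMat(pts1, pts2, cv2.FM_8POINT)

    X = np.hstack([pts1, np.ones((len(pts1), 1))])    # rows are [x, y, 1]
    X2 = np.hstack([pts2, np.ones((len(pts2), 1))])   # rows are [x', y', 1]

    FX = X @ F.T        # rows are (F X)^T
    FtX2 = X2 @ F       # rows are (F^T X')^T
    num = np.sum(X2 * FX, axis=1)                     # X'^T F X, formula (13)
    M = FX[:, 0]**2 + FX[:, 1]**2 + FtX2[:, 0]**2 + FtX2[:, 1]**2
    return num / M                                    # K, formula (14)

def is_moving(candidate_K, eta=0.5, k_thresh=1.0):
    # A candidate is a moving target when the fraction of its points whose
    # |K| exceeds k_thresh (epipolar outliers) is greater than eta, formula (17).
    return np.mean(np.abs(candidate_K) > k_thresh) > eta
```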

The beneficial effects of the invention are as follows: the method first extracts preliminary detection results with the frame-difference method; it then computes the optical flow vector of each point and, by superimposing each target of the current frame with its optical flow vector, predicts its position in the next frame, associating targets across frames and removing part of the false alarms; finally, it uses the high-level scene information, the fundamental matrix F, to distinguish moving points from background points, removing a large number of false alarms.

The present invention is described in detail below in connection with the specific embodiment.

Detailed Description

The specific embodiment of the ground moving target detection method based on high-level scene information carries out the three steps exactly as set out above.

1. Frame difference, as in step one: image registration by Lucas-Kanade sparse optical flow for high-altitude sequences or by the Sobel operator for low-altitude images, RANSAC estimation of the affine transformations (formula (1)), differencing of the registered previous and next frames against the current frame (formula (2)), and binarization with the threshold set to 40.

2. Optical flow association, as in step two, in two parts: optical flow estimation (formulas (3) to (7)) and prediction association (formulas (8) to (12)), with the association acceptance probability ε = 0.8 and with candidate targets retained only when the size of the association set exceeds the set threshold.

3. Motion detection based on high-level information, as in step three: fundamental matrix estimation with the normalized eight-point algorithm (formula (13)), Sampson confidence (formula (14)), the outlier matrix and the inlier/outlier ratio (formulas (15) and (16)), and the moving-target decision with outlier probability threshold η (formula (17)).

Claims (1)

Translated from Chinese

1. A ground moving target detection method based on high-level scene information, characterized by comprising the following steps: step one, frame difference; step two, optical flow association, comprising optical flow estimation and prediction association; and step three, motion detection based on high-level information; each step carried out as set forth in the description above, with the binarization threshold set to 40, the association acceptance probability ε = 0.8, and the outlier probability threshold η.
CN201710023810.2A | Priority Date: 2017-01-13 | Filing Date: 2017-01-13 | Ground moving object detection method based on high-level scene information | Active | Granted as CN106887010B (en)

Priority Applications (1)

Application Number: CN201710023810.2A | Granted as: CN106887010B (en) | Priority Date: 2017-01-13 | Filing Date: 2017-01-13 | Title: Ground moving object detection method based on high-level scene information

Applications Claiming Priority (1)

Application Number: CN201710023810.2A | Granted as: CN106887010B (en) | Priority Date: 2017-01-13 | Filing Date: 2017-01-13 | Title: Ground moving object detection method based on high-level scene information

Publications (2)

Publication Number | Publication Date
CN106887010A (en) | 2017-06-23
CN106887010B (en) | 2019-09-24

Family

ID=59176289

Family Applications (1)

Application Number: CN201710023810.2A | Status: Active | Granted as: CN106887010B (en) | Priority Date: 2017-01-13 | Filing Date: 2017-01-13 | Title: Ground moving object detection method based on high-level scene information

Country Status (1)

Country | Link
CN | CN106887010B (en)

Families Citing this family (8)

* Cited by examiner, † Cited by third party
Publication Number | Priority Date | Publication Date | Assignee | Title
CN109472824B (en)* | 2017-09-07 | 2021-04-30 | 北京京东尚科信息技术有限公司 | Article position change detection method and device, storage medium, and electronic device
CN108830885B (en)* | 2018-05-31 | 2021-12-07 | 北京空间飞行器总体设计部 | Detection false alarm suppression method based on multi-directional differential residual energy correlation
CN109087322B (en)* | 2018-07-18 | 2021-07-27 | 华中科技大学 | A method for detecting small moving objects in aerial images
CN109035306B (en)* | 2018-09-12 | 2020-12-15 | 首都师范大学 | Method and device for automatic detection of moving target
CN109740558B (en)* | 2019-01-10 | 2022-11-18 | 吉林大学 | A moving target detection method based on improved optical flow method
CN110555868A (en)* | 2019-05-31 | 2019-12-10 | 南京航空航天大学 | Method for detecting small moving target under complex ground background
CN111950484A (en)* | 2020-08-18 | 2020-11-17 | 青岛聚好联科技有限公司 | High-altitude parabolic information analysis method and electronic equipment
CN119873549A (en)* | 2024-12-30 | 2025-04-25 | 广州市顺亿机电安装工程有限公司 | Intelligent elevator control system and data processing method

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication Number | Priority Date | Publication Date | Assignee | Title
CN101179713A (en)* | 2007-11-02 | 2008-05-14 | 北京工业大学 | A single moving target detection method in complex background
CN101908214B (en)* | 2010-08-10 | 2012-05-23 | 长安大学 | Moving object detection method based on neighborhood correlation background reconstruction
CN102307274B (en)* | 2011-08-31 | 2013-01-02 | 南京南自信息技术有限公司 | Motion detection method based on edge detection and frame difference
CN103679172B (en)* | 2013-10-10 | 2017-02-08 | 南京理工大学 | Method for detecting long-distance ground moving object via rotary infrared detector
CN105761279B (en)* | 2016-02-18 | 2019-05-24 | 西北工业大学 | Target tracking method based on trajectory segmentation and splicing

Also Published As

Publication Number | Publication Date
CN106887010A (en) | 2017-06-23


Legal Events

Code | Title
PB01 | Publication
SE01 | Entry into force of request for substantive examination
GR01 | Patent grant
