技术领域Technical Field
本发明涉及图像处理技术领域,具体为一种抗遮挡的刚体目标快速匹配及位姿估计方法及系统。The present invention relates to the technical field of image processing, and in particular to an anti-occlusion method and system for fast matching and pose estimation of rigid targets.
背景技术Background technique
机器视觉技术已被广泛应用于工件定位,并且已有多种基于视觉的定位技术被提出并应用于工业生产中。随着深度学习的迅速发展,其在定位检测方面取得不错的效果,利用机器视觉技术完成工业生产中的定位任务已经逐渐成熟。但是工业应用注重算法的稳定性与实时性,深度学习算法在面对全新工件数据集的缺少时可能难以满足工业要求。Machine vision technology has been widely used in workpiece positioning, and a variety of vision-based positioning techniques have been proposed and applied in industrial production. With the rapid development of deep learning, good results have been achieved in positioning and detection, and the use of machine vision to complete positioning tasks in industrial production has gradually matured. However, industrial applications emphasize the stability and real-time performance of algorithms, and deep learning algorithms may struggle to meet industrial requirements when data sets for new workpieces are lacking.
传统模板匹配在图像处理应用以及计算机视觉中有着基础且重要的地位,被广泛应用于工业自动化领域中的目标定位以及抓取等任务。在众多的目标定位任务中基于灰度的模板匹配比较经典,如绝对差分之和(SAD)、差分平方和(SSD)、归一化互相关(NCC)等,这些方法对模板图像中每个像素点做相似性匹配计算,匹配准确性可以得到保证,但灰度模板匹配容易受到光照变化、噪声、遮挡等因素的影响。此外,在面对目标存在旋转或者尺度变化时,定位速度无法满足工业生产的实时性要求。Traditional template matching plays a fundamental and important role in image processing and computer vision, and is widely used in tasks such as target positioning and grasping in industrial automation. Among target positioning methods, grayscale-based template matching is classic, e.g. the sum of absolute differences (SAD), the sum of squared differences (SSD), and normalized cross-correlation (NCC). These methods compute a similarity measure over every pixel of the template image, so matching accuracy is guaranteed, but grayscale template matching is easily affected by illumination changes, noise, occlusion and other factors. In addition, when the target is rotated or scaled, the positioning speed cannot meet the real-time requirements of industrial production.
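As an illustration of the grayscale methods mentioned above, a normalized cross-correlation (NCC) score between two equal-sized patches can be sketched as follows. This is a minimal plain-Python sketch; the flattened-list patch representation and the function name are illustrative, not part of the invention.

```python
import math

def ncc(template, window):
    """Normalized cross-correlation between two equal-sized gray patches
    (given as flattened lists); 1.0 means a perfect linear match."""
    n = len(template)
    mt = sum(template) / n
    mw = sum(window) / n
    num = sum((t - mt) * (w - mw) for t, w in zip(template, window))
    den = math.sqrt(sum((t - mt) ** 2 for t in template) *
                    sum((w - mw) ** 2 for w in window))
    return num / den if den else 0.0
```

Because the score is normalized by the mean and variance of each patch, it is insensitive to linear brightness changes, which is why NCC is preferred over SAD/SSD under varying illumination.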
基于形状信息的模板匹配方法对于噪声、非线性光照变化有较强的鲁棒性,并且有着较快的匹配速度。但形状信息并不具有旋转、尺度变化不变性,如果想要获取多角度、多尺度变化的目标,需要对模板图像提取不同角度、不同尺度下的特征信息,并建立模板库用于遍历匹配不同姿态下的目标,这极大地增加了匹配的耗时,并且基于形状的模板匹配很容易受遮挡干扰的影响。Template matching based on shape information is highly robust to noise and nonlinear illumination changes and matches quickly. However, shape information is not invariant to rotation and scale changes: to detect targets at multiple angles and scales, feature information must be extracted from the template image at different angles and scales, and a template library must be built and traversed to match targets in different poses. This greatly increases matching time, and shape-based template matching is also easily disturbed by occlusion.
广义霍夫变换方法用于在图像中寻找特定形状的目标物体,基本思想是将图像中的边缘特征点映射到一个参数空间,然后通过在参数空间中寻找峰值位置来确定目标形状的位置和姿态。广义霍夫变换的优点是能够检测任意形状的对象,对目标的旋转、缩放和平移具有较好的鲁棒性,在目标检测、图像识别、模式匹配等领域有广泛应用。但是在实际的工业应用中,其采用不具有旋转、尺度变化不变性的梯度方向作为索引建立参考表,需要四维的参数空间进行投票统计,参数维度太高会导致存储空间大以及计算复杂度高的弊端,并且对于大规模图像或复杂形状的检测可能需要较长的处理时间。The generalized Hough transform is used to find target objects of specific shapes in images. The basic idea is to map edge feature points of the image into a parameter space and then determine the position and pose of the target shape by finding peaks in that space. Its advantages are that it can detect objects of arbitrary shape and is robust to rotation, scaling and translation of the target, so it is widely used in target detection, image recognition, pattern matching and other fields. However, in practical industrial applications, it uses the gradient direction, which is not invariant to rotation and scale changes, as the index of the reference table, and requires a four-dimensional parameter space for vote accumulation. Such a high parameter dimension leads to large storage requirements and high computational complexity, and detecting targets in large images or with complex shapes may take a long time.
发明内容Summary of the invention
为解决上述的问题,本发明提供了一种抗遮挡的刚体目标快速匹配及位姿估计方法,包括:To solve the above problems, the present invention provides a method for fast matching and pose estimation of rigid targets with anti-occlusion, comprising:
S1、提取模板图像中刚体目标的边缘点,通过边缘点构建点对;基于梯度方向差对已构建的点对进行筛选过滤,获取点对的特征信息建立参考表,所述特征信息包括子点内角、基点与子点沿梯度方向的边长比例;S1, extracting edge points of rigid objects in the template image, and constructing point pairs through the edge points; filtering the constructed point pairs based on the gradient direction difference, obtaining feature information of the point pairs to establish a reference table, wherein the feature information includes the inner angle of the sub-point, and the ratio of the side length of the base point to the sub-point along the gradient direction;
S2、获取目标图像进行边缘点提取,并基于网格化筛选,在包含边缘点的每个网格中,筛选梯度幅值最大的边缘点作为点对的基点;根据基点的梯度方向建立梯度方向查找表,根据基点及梯度方向差进行点对子点的筛选;S2, obtaining the target image to extract edge points, and based on grid screening, in each grid containing edge points, screening the edge points with the largest gradient amplitude as the base points of the point pairs; establishing a gradient direction lookup table according to the gradient direction of the base points, and screening the sub-points of the point pairs according to the base points and the gradient direction differences;
S3、将筛选后的基点与子点分别构建点对,获取目标图像点对的特征信息,并根据特征信息的子点内角计算角度索引,根据基点与子点沿梯度方向边长比例计算尺度索引;基于已建立的参考表与角度索引、尺度索引,获取目标图像相对模板图像的旋转角度与放缩尺度,得到刚体目标的参考点位置;S3, construct point pairs with the filtered base points and sub-points respectively, obtain feature information of the target image point pairs, and calculate the angle index according to the sub-point inner angle of the feature information, and calculate the scale index according to the ratio of the side lengths of the base point and the sub-point along the gradient direction; based on the established reference table and the angle index and scale index, obtain the rotation angle and scaling scale of the target image relative to the template image, and obtain the reference point position of the rigid target;
S4、对参考点位置进行投票,确定目标图像中刚体目标的边缘点与中心位置,对刚体目标的位姿进行匹配。S4. Vote for the reference point position, determine the edge point and center position of the rigid target in the target image, and match the position and posture of the rigid target.
所述S1中采用梯度方向量化区间对已构建的点对进行筛选过滤,具体方法为,In S1, the gradient direction quantization interval is used to filter the constructed point pairs. The specific method is:
分别计算点对中基点的梯度方向与子点的梯度方向,并按照逆时针方向计算基点与子点的梯度方向差为:Calculate the gradient direction of the base point and the gradient direction of the sub-point in the point pair respectively, and calculate the gradient direction difference between the base point and the sub-point in the counterclockwise direction:
result = mascd − mafst,
masub = result, result ≥ 0;masub = result + 360, result < 0,
其中,mafst表示基点的梯度方向;mascd表示子点的梯度方向;masub表示梯度方向差;result表示基点的梯度方向与子点的梯度方向的差值;Among them,mafst represents the gradient direction of the base point;mascd represents the gradient direction of the sub-point;masub represents the gradient direction difference;result represents the difference between the gradient direction of the base point and the gradient direction of the sub-point;
以梯度方向差为约束条件,对冗余点对进行筛选过滤,统计点对梯度方向差所属数量最多的最大量化区间,保留最大量化区间的点对。With the gradient direction difference as the constraint, redundant point pairs are filtered out: the quantization interval containing the largest number of point-pair direction differences is found, and only the point pairs falling into that maximum quantization interval are retained.
所述S1中参考表包括一级索引、二级索引和存储值,所述一级索引为子点内角,所述二级索引为边长比例,所述存储值包括基点梯度方向、子点梯度方向,点对中心点相对参考点的位移向量,点对线段长度。The reference table in S1 includes a primary index, a secondary index and a stored value, wherein the primary index is the inner angle of the sub-point, the secondary index is the side length ratio, and the stored value includes the base point gradient direction, the sub-point gradient direction, the displacement vector of the point to the center point relative to the reference point, and the point to line segment length.
所述S2中根据基点的梯度方向建立梯度方向查找表,根据基点及梯度方向差进行点对子点的筛选的具体操作为:In S2, a gradient direction lookup table is established according to the gradient direction of the base point, and the specific operation of selecting the sub-points according to the base point and the gradient direction difference is:
S2.1、建立梯度方向查找表,所述梯度方向查找表的索引为向下取整的梯度方向,值为梯度方向在各索引区间内的边缘点;S2.1. Establish a gradient direction lookup table, wherein the index of the gradient direction lookup table is the gradient direction rounded down, and the value is the edge point of the gradient direction within each index interval;
S2.2、基于梯度方向差最大量化区间,确定实际角度范围;S2.2, determining the actual angle range based on the maximum quantization interval of the gradient direction difference;
S2.3、根据已知基点的梯度方向,得到满足最大量化区间约束的子点的梯度方向范围;S2.3, according to the gradient direction of the known base point, obtain the gradient direction range of the sub-point that satisfies the maximum quantization interval constraint;
S2.4、根据子点的梯度方向范围,查找梯度方向查找表中对应的全部边缘点,即为筛选出的子点。S2.4. According to the gradient direction range of the sub-point, all corresponding edge points in the gradient direction lookup table are searched, which are the selected sub-points.
所述S3中获取目标图像点对的特征信息具体为根据点对构建局部三角形,根据子点处的三角形内角按照角度步长计算角度索引;根据基点与子点沿梯度方向的边长比例按照尺度步长计算尺度索引。The step of obtaining the feature information of the target image point pair in S3 specifically includes constructing a local triangle based on the point pair, calculating the angle index according to the angle step according to the inner angle of the triangle at the sub-point, and calculating the scale index according to the scale step according to the ratio of the side lengths of the base point and the sub-point along the gradient direction.
所述基于已建立的参考表与角度索引、尺度索引,获取目标图像相对模板图像的旋转角度与放缩尺度,得到刚体目标的参考点位置,其具体方法为:The method of obtaining the rotation angle and scaling scale of the target image relative to the template image based on the established reference table, angle index, and scale index, and obtaining the reference point position of the rigid target is as follows:
根据目标图像点对的角度索引与尺度索引,在参考表中查找获得模板图像对应点对的梯度方向、点对线段长度和位移向量;According to the angle index and scale index of the target image point pair, the gradient direction, point pair line segment length and displacement vector of the corresponding point pair of the template image are obtained in the reference table;
计算目标图像点对相对于模板图像点对的旋转角度;根据目标图像点对线段长度与模板图像点对线段长度,计算目标图像点对相对于模板图像点对的放缩尺度;Calculate the rotation angle of the target image point pair relative to the template image point pair; calculate the scaling scale of the target image point pair relative to the template image point pair based on the line segment length of the target image point pair and the line segment length of the template image point pair;
基于旋转角度与放缩尺度,根据参考表的位移向量计算目标图像中刚体目标的参考点位置。Based on the rotation angle and the scaling, the reference point position of the rigid object in the target image is calculated according to the displacement vector of the reference table.
在具体实施方式中,所述S2中网格化是基于模板图像的尺寸Sizetemp确定自适应尺寸的网格,所述网格的尺寸μwh的计算公式为:In a specific implementation, the gridding in S2 is to determine the grid of adaptive size based on the sizeSizetemp of the template image, and the calculation formula of the sizeμwh of the grid is:
, ,
式中,WT、HT分别表示模板图像的宽度和高度,Thresize表示最小网格划分的阈值。WhereWT andHT represent the width and height of the template image respectively, andThresize represents the minimum grid division threshold.
本发明还提供了一种抗遮挡的刚体目标快速匹配及位姿估计系统,包括:The present invention also provides an anti-occlusion rigid target fast matching and posture estimation system, comprising:
模板图像模块,用于提取模板图像中刚体目标的边缘点,通过边缘点构建点对;基于梯度方向差对已构建的点对进行筛选过滤,获取点对的特征信息建立参考表,所述特征信息包括子点内角、基点与子点沿梯度方向的边长比例;The template image module is used to extract the edge points of the rigid body target in the template image and construct point pairs through the edge points; the constructed point pairs are screened and filtered based on the gradient direction difference, and the feature information of the point pairs is obtained to establish a reference table, wherein the feature information includes the inner angle of the sub-point and the ratio of the side length of the base point to the sub-point along the gradient direction;
目标图像模块,用于获取目标图像进行边缘点提取,并基于网格化筛选,在包含边缘点的每个网格中,筛选梯度幅值最大的边缘点作为点对的基点;根据基点的梯度方向建立梯度方向查找表,根据基点及梯度方向差进行点对子点的筛选;The target image module is used to obtain the target image for edge point extraction, and based on grid screening, in each grid containing edge points, the edge point with the largest gradient amplitude is screened as the base point of the point pair; a gradient direction lookup table is established according to the gradient direction of the base point, and the sub-points of the point pair are screened according to the base point and the gradient direction difference;
点对索引模块,将筛选后的基点与子点分别构建点对,获取目标图像点对的特征信息,并根据特征信息的子点内角计算角度索引,根据基点与子点沿梯度方向边长比例计算尺度索引;基于已建立的参考表与角度索引、尺度索引,获取目标图像相对模板图像的旋转角度与放缩尺度,得到刚体目标的参考点位置;The point pair indexing module constructs point pairs with the filtered base points and sub-points respectively, obtains the feature information of the target image point pairs, calculates the angle index according to the sub-point inner angle of the feature information, and calculates the scale index according to the ratio of the side lengths of the base point and the sub-point along the gradient direction; based on the established reference table, angle index, and scale index, obtains the rotation angle and scaling scale of the target image relative to the template image, and obtains the reference point position of the rigid target;
匹配模块,用于对参考点位置进行投票,确定目标图像中刚体目标的边缘点与中心位置,对刚体目标的位姿进行匹配。The matching module is used to vote on the reference point position, determine the edge points and center position of the rigid target in the target image, and match the position and posture of the rigid target.
有益效果:本发明为一种抗遮挡的刚体目标快速匹配及位姿估计方法,利用边缘点之间组成点对构建旋转、尺度变化不变性的特征信息,可以将参数投票空间从四维降至二维,解决了广义霍夫变换参数维度过高的问题;通过二级点对筛选策略,对目标图像组成点对的基点,采用网格化筛选对其进行过滤;对组成点对的子点,采用梯度方向差量化区间选择筛选,删除冗余点,加速了匹配速度,减少匹配耗时;在进行投票时,只需要一种投票结果图即可获得刚体目标的位姿匹配,减少了存储空间。Beneficial effects: the present invention provides an anti-occlusion method for fast matching and pose estimation of rigid targets. Point pairs formed between edge points are used to construct feature information that is invariant to rotation and scale changes, which reduces the parameter voting space from four dimensions to two and solves the problem of the excessive parameter dimension of the generalized Hough transform. A two-level point-pair screening strategy is adopted: the base points of the target-image point pairs are filtered by grid-based screening, and the sub-points are screened by selecting the quantization interval of the gradient direction difference, so redundant points are deleted, matching is accelerated and matching time is reduced. During voting, a single voting result map suffices to obtain the pose of the rigid target, which reduces storage space.
附图说明BRIEF DESCRIPTION OF THE DRAWINGS
图1为方法的流程示意图;Fig. 1 is a schematic flow diagram of the method;
图2为梯度方向量化区间筛选点对的示意图,其中a为模板图像提取的边缘点图,b为统计梯度方向差的量化区间图,c为筛选后的点对图;FIG2 is a schematic diagram of the point pair screening of the gradient direction quantization interval, wherein a is the edge point diagram extracted from the template image, b is the quantization interval diagram of the statistical gradient direction difference, and c is the point pair diagram after screening;
图3为点对组成局部三角形示意图;Fig. 3 is a schematic diagram of a local triangle formed by point pairs;
图4为网格化筛选图,其中a为目标图像,b为网格图像,c为基点筛选图像,d为局部放大细节图;Figure 4 is a gridded screening image, where a is the target image, b is the grid image, c is the base point screening image, and d is the local magnified detail image;
图5为目标图像进行点对投票的示意图,其中a为目标图像,b为目标图像提取的边缘图像,c为点对构成图,d为投票计算图。FIG5 is a schematic diagram of point pair voting for a target image, wherein a is the target image, b is the edge image extracted from the target image, c is the point pair composition graph, and d is the voting calculation graph.
具体实施方式DETAILED DESCRIPTION
下面将结合附图更详细地描述本公开的示例性实施方式。Exemplary embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings.
参见图1,本实施例提供了一种抗遮挡的刚体目标快速匹配及位姿估计方法,具体包括以下步骤:Referring to FIG. 1 , this embodiment provides a method for fast matching and pose estimation of rigid targets with anti-occlusion, which specifically includes the following steps:
S1、提取模板图像中刚体目标的边缘点,通过边缘点构建点对;基于梯度方向差对已构建的点对进行筛选过滤,获取点对的特征信息建立参考表,所述特征信息包括子点内角、基点与子点沿梯度方向的边长比例;S1, extracting edge points of rigid objects in the template image, and constructing point pairs through the edge points; filtering the constructed point pairs based on the gradient direction difference, obtaining feature information of the point pairs to establish a reference table, wherein the feature information includes the inner angle of the sub-point, and the ratio of the side length of the base point to the sub-point along the gradient direction;
S2、获取目标图像进行边缘点提取,并基于网格化筛选,在包含边缘点的每个网格中,筛选梯度幅值最大的边缘点作为点对的基点;根据基点的梯度方向建立梯度方向查找表,根据基点及梯度方向差进行点对子点的筛选;S2, obtaining the target image to extract edge points, and based on grid screening, in each grid containing edge points, screening the edge points with the largest gradient amplitude as the base points of the point pairs; establishing a gradient direction lookup table according to the gradient direction of the base points, and screening the sub-points of the point pairs according to the base points and the gradient direction differences;
S3、将筛选后的基点与子点分别构建点对,获取目标图像点对的特征信息,并根据特征信息的子点内角计算角度索引,根据基点与子点沿梯度方向边长比例计算尺度索引;基于已建立的参考表与角度索引、尺度索引,获取目标图像相对模板图像的旋转角度与放缩尺度,得到刚体目标的参考点位置;S3, construct point pairs with the filtered base points and sub-points respectively, obtain feature information of the target image point pairs, and calculate the angle index according to the sub-point inner angle of the feature information, and calculate the scale index according to the ratio of the side lengths of the base point and the sub-point along the gradient direction; based on the established reference table and the angle index and scale index, obtain the rotation angle and scaling scale of the target image relative to the template image, and obtain the reference point position of the rigid target;
S4、对参考点位置进行投票,确定目标图像中刚体目标的边缘点与中心位置,对刚体目标的位姿进行匹配。S4. Vote for the reference point position, determine the edge point and center position of the rigid target in the target image, and match the position and posture of the rigid target.
所述S1中提取模板图像中刚体目标的边缘点,通过获取刚体目标的模板图像进行离线处理,采用Canny边缘检测对模板图像中刚体目标进行边缘点提取,其操作为:In S1, edge points of the rigid body target in the template image are extracted, and the template image of the rigid body target is obtained for offline processing, and the edge points of the rigid body target in the template image are extracted using Canny edge detection, and the operation is as follows:
S1.1、对模板图像进行高斯滤波,以平滑图像减少噪声干扰;S1.1, perform Gaussian filtering on the template image to smooth the image and reduce noise interference;
S1.2、对滤波后的模板图像进行梯度计算,计算模板图像中每个像素点的梯度值和方向;S1.2, performing gradient calculation on the filtered template image, and calculating the gradient value and direction of each pixel in the template image;
S1.3、对各像素点的梯度幅值进行非极大值抑制,以获取更精确的边缘信息;S1.3, perform non-maximum suppression on the gradient amplitude of each pixel to obtain more accurate edge information;
S1.4、计算最大类间方差对应的像素值作为Canny算子的强阈值Threstrong,取强阈值的1/2作为Canny算子的弱阈值Threweak,通过强阈值Threstrong与弱阈值Threweak检测符合阈值条件的边缘点,得到模板图像中刚体目标的边缘点。S1.4. The pixel value corresponding to the maximum between-class variance is taken as the strong thresholdThrestrong of the Canny operator, and half of it as the weak thresholdThreweak; edge points satisfying the threshold conditions are detected withThrestrong andThreweak, yielding the edge points of the rigid target in the template image.
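Step S1.4 derives the strong Canny threshold from the maximum between-class variance (Otsu's criterion) and halves it for the weak threshold. A minimal pure-Python sketch of that Otsu computation on a 256-bin grayscale histogram follows; the function name and the synthetic bimodal histogram are illustrative only.

```python
def otsu_threshold(hist):
    """Return the gray level maximizing the between-class variance (Otsu)."""
    total = sum(hist)
    sum_all = sum(g * h for g, h in enumerate(hist))
    w_bg = 0.0          # background pixel count so far
    sum_bg = 0.0        # background intensity sum so far
    best_t, best_var = 0, -1.0
    for t in range(len(hist)):
        w_bg += hist[t]
        if w_bg == 0:
            continue
        w_fg = total - w_bg
        if w_fg == 0:
            break
        sum_bg += t * hist[t]
        mean_bg = sum_bg / w_bg
        mean_fg = (sum_all - sum_bg) / w_fg
        var_between = w_bg * w_fg * (mean_bg - mean_fg) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t

# Illustrative bimodal histogram: a dark peak at 50, a bright peak at 200.
hist = [0] * 256
hist[50] = 100
hist[200] = 100
thr_strong = otsu_threshold(hist)
thr_weak = thr_strong / 2   # weak threshold = strong / 2, as in S1.4
```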
所述S1中采用梯度方向量化区间对已构建的点对进行筛选过滤,过滤冗余点对,其操作具体为:In S1, the constructed point pairs are screened and filtered using the gradient direction quantization interval to filter the redundant point pairs. The specific operation is as follows:
基于提取的边缘点EK,对任意两个边缘点进行点对的构建,将全部边缘点之间组成点对,将点对的第一个点作为基点Efst,将其余边缘点分别作为点对的第二个点,即作为子点Escd,所述基点与子点组成的全部点对为:Based on the extracted edge pointsEK, point pairs are constructed between all edge points: the first point of a pair serves as the base pointEfst, and each of the remaining edge points in turn serves as the second point of the pair, i.e. the sub-pointEscd. The full set of point pairs formed by base points and sub-points is:
CP = {(Efst,Escd) |Efst,Escd ∈EK,Efst ≠Escd }。
分别计算点对中基点的梯度方向与子点的梯度方向,并按照逆时针方向计算基点与子点的梯度方向差为:Calculate the gradient direction of the base point and the gradient direction of the sub-point in the point pair respectively, and calculate the gradient direction difference between the base point and the sub-point in the counterclockwise direction:
result = mascd − mafst,
masub = result, result ≥ 0;masub = result + 360, result < 0,
其中,mafst表示基点的梯度方向;mascd表示子点的梯度方向;masub表示梯度方向差;result表示基点的梯度方向与子点的梯度方向的差值。Among them,mafst represents the gradient direction of the base point;mascd represents the gradient direction of the sub-point;masub represents the gradient direction difference;result represents the difference between the gradient direction of the base point and the gradient direction of the sub-point.
如图2所示,所述梯度方向差masub是具有旋转不变性的,以梯度方向差作为约束条件,对冗余点对进行筛选过滤。As shown in FIG. 2 , the gradient direction differencemasub is rotationally invariant, and the gradient direction difference is used as a constraint condition to filter redundant point pairs.
所述梯度方向差masub的取值范围为[0,360),将梯度方向差进行区间量化,定义Δbin为一个量化区间的宽度,将梯度方向差量化到360/Δbin个区间内;统计点对梯度方向差所属数量最多的最大量化区间binmax,得到落在该量化区间binmax内的点对CPreq。The gradient direction differencemasub lies in [0, 360) and is quantized into intervals: Δbin is defined as the width of one quantization interval, so the direction differences are quantized into 360/Δbin intervals. The intervalbinmax containing the largest number of point-pair direction differences is found, and the point pairsCPreq falling intobinmax are obtained.
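The counterclockwise direction difference and its interval quantization described above can be sketched as follows. This is a minimal illustration; the bin width of 10° stands in for Δbin and is an assumed value, and point pairs are represented simply by their two gradient directions.

```python
from collections import Counter

def direction_diff(ma_fst, ma_scd):
    """Counterclockwise gradient-direction difference, folded into [0, 360)."""
    result = ma_scd - ma_fst
    return result if result >= 0 else result + 360

def max_quantization_bin(pairs, bin_width=10):
    """Histogram the direction differences of all point pairs into
    360/bin_width intervals; return (bin_max, pairs kept in that interval)."""
    counts = Counter(int(direction_diff(f, s) // bin_width) for f, s in pairs)
    bin_max, _ = counts.most_common(1)[0]
    kept = [(f, s) for f, s in pairs
            if int(direction_diff(f, s) // bin_width) == bin_max]
    return bin_max, kept
```

Only the pairs in the dominant interval survive, which is the filtering step that removes redundant point pairs before the reference table is built.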
根据筛选过滤后的点对,获取边缘点的特征信息建立参考表,根据点对组成局部三角形确定旋转、尺度变化不变性的特征信息作为参考表的索引,将参考表降至二维,实现减少耗时的目的,所述建立参考表具体为:According to the filtered point pairs, the feature information of the edge points is obtained to establish a reference table. The feature information of the rotation and scale change invariance is determined according to the point pairs forming the local triangle as the index of the reference table, and the reference table is reduced to two dimensions to achieve the purpose of reducing time consumption. The establishment of the reference table is specifically as follows:
参见图3,将点对之间的线段L、基点Efst梯度方向的直线lfst、子点Escd梯度方向的直线lscd组成局部三角形,所述子点Escd处的三角形内角β具有旋转不变性,所述子点内角β通过子点Escd的梯度方向mascd、基点Efst到子点Escd之间的方位角φ得到,其计算公式为:Referring to Fig. 3, the segment L between the point pair, the linelfst through the base pointEfst along its gradient direction, and the linelscd through the sub-pointEscd along its gradient direction form a local triangle. The interior angle β of the triangle at the sub-pointEscd is rotation-invariant and is obtained from the gradient directionmascd of the sub-point and the azimuth φ from the base point to the sub-point:
φ = arctan((yscd −yfst)/(xscd −xfst)),
β = |mascd − φ|,若β > 180°则取β = 360° − β (if β > 180°, β = 360° − β),
将子点内角β作为参考表的一级索引,其取值范围为(0,180),按照角度步长Δθ将其进行分组,分组的索引iβ的计算公式为:The sub-point interior angle β serves as the primary index of the reference table, with values in (0, 180). It is grouped by the angle step Δθ, and the group indexiβ is computed as:
iβ = ⌊β/Δθ⌋。
局部三角形中所述基点与子点分别沿梯度方向的边长为dfst、dscd,所述边长比例ρ =dfst/dscd具有尺度变化不变性,以基点梯度方向与子点梯度方向的边长比例作为参考表的二级索引,按照比例步长Δs将其进行分组,分组索引iρ的计算公式为:In the local triangle, the side lengths along the gradient directions of the base point and the sub-point aredfst anddscd, respectively. The side-length ratio ρ =dfst/dscd is scale-invariant and serves as the secondary index of the reference table; it is grouped by the ratio step Δs, and the group indexiρ is computed as:
iρ = ⌊ρ/Δs⌋,
根据子点内角、边长比例分别作为一、二级索引建立参考表,所述参考表还包括存储值,所述存储值包括梯度方向mafst、mascd,以及将模板图像中心点作为参考点pref,计算点对中心点pc指向参考点pref的位移向量v,以及点对线段长度L,所述位移向量计算公式为:A reference table is built with the sub-point interior angle and the side-length ratio as the primary and secondary indexes, respectively. The table also contains stored values, namely the gradient directionsmafst andmascd, the displacement vector v from the point-pair centerpc to the reference pointpref (the template image center), and the point-pair segment length L. The displacement vector is computed as:
v = (vx,vy) = (xref −xc,yref −yc)。
表1为参考表Table 1 is a reference table
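The construction of the reference table can be sketched as below. Assumptions are flagged explicitly: the side-length ratio is computed via the law of sines on the local triangle (it then depends only on the two angles, which is one reading of the "side lengths along the gradient directions" above); the displacement vector is stored as center-to-reference so the voting step can add it; and `angle_step`/`scale_step` are illustrative values for Δθ and Δs.

```python
import math

def norm_angle(a):
    """Fold an angle difference into [0, 180] degrees."""
    a = abs(a) % 360.0
    return 360.0 - a if a > 180.0 else a

def pair_features(p_fst, ma_fst, p_scd, ma_scd):
    """Rotation/scale-invariant features of one point pair: interior angle
    beta at the sub-point, side-length ratio rho, and segment length L."""
    (xf, yf), (xs, ys) = p_fst, p_scd
    phi = math.degrees(math.atan2(ys - yf, xs - xf)) % 360.0  # azimuth base -> sub
    beta = norm_angle(ma_scd - phi)   # interior angle at the sub-point
    alpha = norm_angle(ma_fst - phi)  # angle between base gradient line and segment
    length = math.hypot(xs - xf, ys - yf)
    # Law of sines on the local triangle: the ratio of the sides along the
    # two gradient directions reduces to sin(beta)/sin(alpha) (assumption).
    rho = math.sin(math.radians(beta)) / max(math.sin(math.radians(alpha)), 1e-9)
    return beta, rho, length

def build_reference_table(pairs, ref_point, angle_step=2.0, scale_step=0.1):
    """Two-level table keyed by (floor(beta/angle_step), floor(rho/scale_step));
    each entry stores (ma_fst, ma_scd, displacement center->reference, length)."""
    table = {}
    rx, ry = ref_point
    for p_fst, ma_fst, p_scd, ma_scd in pairs:
        beta, rho, length = pair_features(p_fst, ma_fst, p_scd, ma_scd)
        key = (int(beta // angle_step), int(rho // scale_step))
        cx = (p_fst[0] + p_scd[0]) / 2.0
        cy = (p_fst[1] + p_scd[1]) / 2.0
        table.setdefault(key, []).append((ma_fst, ma_scd, (rx - cx, ry - cy), length))
    return table
```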
所述S2需要对目标图像进行边缘点提取,并对边缘点构造点对,从而确定点对的旋转、尺度变化不变性特征。但在判断点对梯度方向差是否满足量化区间binmax时,需遍历全部边缘点作为点对的基点,并遍历剩余边缘点作为点对的子点,计算点对之间的梯度方向差,再筛选出落在量化区间binmax内的点对,这样会因基点冗余造成较多的无效计算,降低匹配效率。In S2, edge points must be extracted from the target image and assembled into point pairs to obtain their rotation- and scale-invariant features. However, checking whether the direction difference of every candidate pair falls into the quantization intervalbinmax would require traversing all edge points as base points and all remaining edge points as sub-points, computing the direction difference of each pair and then keeping those withinbinmax. The redundant base points cause many wasted computations and lower the matching efficiency.
在具体实施方式中,参考图4,所述S2中基于网格化筛选,在包含边缘点的每个网格中,筛选梯度幅值最大的边缘点作为点对的基点,其操作具体为:In a specific implementation, referring to FIG. 4 , the S2 is based on grid-based screening, and in each grid containing edge points, the edge point with the largest gradient amplitude is screened as the base point of the point pair, and the specific operation is as follows:
网格化是基于模板图像的尺寸Sizetemp确定自适应尺寸的网格,所述网格的尺寸μwh的计算公式为:The gridding is to determine the adaptive size of the grid based on the sizeSizetemp of the template image. The calculation formula of the sizeμwh of the grid is:
, ,
式中,WT、HT分别表示模板图像的宽度和高度,Thresize表示最小网格划分的阈值。WhereWT andHT represent the width and height of the template image respectively, andThresize represents the minimum grid division threshold.
在包含目标图像边缘点的每个网格内,根据梯度幅值进行筛选,仅保留梯度幅值最大的边缘点作为点对的基点。经过网格化筛选,避免了全部边缘点作为基点组成点对,提高了匹配效率。Within each grid cell that contains edge points of the target image, the points are screened by gradient magnitude, and only the edge point with the largest gradient magnitude is kept as a base point of point pairs. This grid-based screening avoids using every edge point as a base point and thus improves matching efficiency.
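The per-cell screening above can be sketched in a few lines. This is an illustrative sketch: edge points are assumed to come as `(x, y, gradient_magnitude)` tuples and `grid_size` stands in for the adaptive cell size μwh.

```python
def select_base_points(edge_points, grid_size):
    """Keep, per grid cell, only the edge point with the largest gradient
    magnitude; edge_points is an iterable of (x, y, magnitude)."""
    best = {}
    for x, y, mag in edge_points:
        cell = (x // grid_size, y // grid_size)
        if cell not in best or mag > best[cell][2]:
            best[cell] = (x, y, mag)
    return list(best.values())
```

A single pass over the edge points suffices, so the screening itself is linear in the number of edge points.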
同时,所述S2在进行目标图像的点对构建时,同样需要满足和离散阶段对模板图像处理的梯度方向差量化约束,即需要根据已筛选出的基点,寻找符合梯度方向差约束要求的子点构建点对。At the same time, when constructing point pairs of the target image, S2 also needs to meet the gradient direction difference quantization constraints on the template image processing in the discrete stage, that is, it is necessary to find sub-points that meet the gradient direction difference constraint requirements based on the screened base points to construct point pairs.
所述S2根据基点的梯度方向建立梯度方向查找表,并通过梯度方向查找表进行子点筛选,将所有边缘点按照梯度方向进行存储,根据已筛选基点的梯度方向,查找满足梯度方向差的梯度方向表索引,取此索引下的全部边缘点作为子点,构建点对,其操作具体为:S2 establishes a gradient direction lookup table according to the gradient direction of the base point, and screens sub-points through the gradient direction lookup table, stores all edge points according to the gradient direction, and searches for the gradient direction table index that satisfies the gradient direction difference according to the gradient direction of the screened base point, takes all edge points under this index as sub-points, and constructs point pairs. The specific operation is as follows:
S2.1、建立梯度方向查找表,所述梯度方向查找表的索引为向下取整的梯度方向,值为梯度方向在各索引区间内的边缘点;S2.1. Establish a gradient direction lookup table, wherein the index of the gradient direction lookup table is the gradient direction rounded down, and the value is the edge point of the gradient direction within each index interval;
表2 梯度方向查找表Table 2 Gradient direction lookup table
所述梯度方向查找表的索引计算公式为:The index calculation formula of the gradient direction lookup table is:
idx = ⌊masrc⌋,
其中,masrc表示目标图像边缘点的梯度方向。Among them,masrc represents the gradient direction of the edge point of the target image.
S2.2、基于梯度方向差最大量化区间,确定实际角度范围;S2.2, determining the actual angle range based on the maximum quantization interval of the gradient direction difference;
通过梯度方向差最大量化区间binmax,计算出实际的角度范围左值βleft和右值βright,其公式为:Using the maximum quantization intervalbinmax of the gradient direction difference, the left and right bounds βleft and βright of the actual angle range are computed as:
βleft =binmax × Δbin,
βright = (binmax + 1) × Δbin。
S2.3、根据已知基点的梯度方向,得到满足最大量化区间约束的子点的梯度方向范围为:S2.3. According to the gradient direction of the known base point, the gradient direction range of the sub-point that satisfies the maximum quantization interval constraint is obtained as follows:
mascd ∈ [ (mafst +binmax·Δbin) mod 360, (mafst + (binmax + 1)·Δbin) mod 360 )。
S2.4、根据子点的梯度方向范围,查找梯度方向查找表中对应的全部边缘点,即为筛选出的子点。S2.4. According to the gradient direction range of the sub-point, all corresponding edge points in the gradient direction lookup table are searched, which are the selected sub-points.
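Steps S2.1 to S2.4 can be sketched as a lookup table keyed by the integer part of the gradient direction, plus a range query for each base point. The bin width of 10° again stands in for Δbin, and integer-degree directions are assumed for simplicity.

```python
def build_direction_lut(edge_points):
    """LUT: index = gradient direction rounded down; value = edge points."""
    lut = {}
    for pt, ma in edge_points:
        lut.setdefault(int(ma) % 360, []).append(pt)
    return lut

def candidate_subpoints(lut, ma_fst, bin_max, bin_width=10):
    """Edge points whose direction lies in
    [ma_fst + bin_max*bin_width, ma_fst + (bin_max+1)*bin_width) mod 360,
    i.e. the sub-points satisfying the max-interval constraint."""
    left = (int(ma_fst) + bin_max * bin_width) % 360
    out = []
    for k in range(bin_width):
        out.extend(lut.get((left + k) % 360, []))
    return out
```

Instead of testing every edge point against the constraint, only the bin_width LUT buckets inside the admissible range are visited, which is the speed-up the text describes.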
通过将筛选后的基点与子点分别构建点对CPsource。S3根据点对构建局部三角形,如图5所示,计算子点处的三角形内角β,并按照角度步长Δθ计算角度索引iβ;计算基点与子点沿梯度方向的边长比例ρ,同样按照尺度步长Δs计算尺度索引iρ;The screened base points and sub-points are combined into point pairsCPsource. In S3, a local triangle is constructed for each point pair, as shown in Fig. 5; the interior angle β at the sub-point is computed and the angle indexiβ is obtained with the angle step Δθ; the side-length ratio ρ of the base point and sub-point along their gradient directions is computed and the scale indexiρ is obtained with the scale step Δs;
根据目标图像点对的角度索引与尺度索引,在参考表中查找获得模板图像对应点对的梯度方向、点对线段长度和位移向量;According to the angle index and scale index of the target image point pair, the gradient direction, point pair line segment length and displacement vector of the corresponding point pair of the template image are obtained in the reference table;
计算目标图像点对相对于模板图像点对的旋转角度θ,其公式为:The rotation angle θ of the target-image point pair relative to the template-image point pair is computed as:
θ = (Σmasrc − Σmatmp)/2,
式中,Σmasrc表示目标图像点对两点的梯度方向之和;Σmatmp表示模板图像对应点对两点的梯度方向之和。Where Σmasrc is the sum of the gradient directions of the two points of the target-image pair, and Σmatmp is the sum of the gradient directions of the corresponding template-image pair.
计算目标图像点对相对于模板图像点对的放缩尺度σ,其公式为:The scale σ of the target-image point pair relative to the template-image point pair is computed as:
σ =Lsrc /Ltmp,
式中,Lsrc表示目标图像点对线段长度;Ltmp表示模板图像点对线段长度。Where,Lsrc is the segment length of the target-image point pair andLtmp is the segment length of the template-image point pair.
基于旋转角度θ和放缩尺度σ,根据点对中心点指向参考点的位移向量v,计算目标图像中刚体目标的参考点位置pref。Based on the rotation angle θ and the scale σ, the reference point positionpref of the rigid target in the target image is computed from the displacement vector v pointing from the point-pair center to the reference point.
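Putting the three recovered quantities together, one table entry plus one target-image pair yields a candidate reference point. The sketch below follows the formulas above (θ as half the difference of direction sums, σ as the segment-length ratio, and the rotated, scaled displacement added to the pair center); the tuple layouts are assumptions matching the earlier sketches, not a fixed interface of the invention.

```python
import math

def locate_reference(entry, pair_src):
    """entry: (ma_fst_t, ma_scd_t, (vx, vy), L_t) from the reference table;
    pair_src: (p_fst, ma_fst, p_scd, ma_scd) from the target image.
    Returns (theta, sigma, candidate reference point)."""
    ma_f_t, ma_s_t, (vx, vy), l_t = entry
    (xf, yf), ma_f, (xs, ys), ma_s = pair_src
    theta = ((ma_f + ma_s) - (ma_f_t + ma_s_t)) / 2.0 % 360.0  # rotation angle
    sigma = math.hypot(xs - xf, ys - yf) / l_t                 # scale L_src / L_tmp
    cx, cy = (xf + xs) / 2.0, (yf + ys) / 2.0                  # pair center
    t = math.radians(theta)
    rx = cx + sigma * (math.cos(t) * vx - math.sin(t) * vy)    # p_ref = p_c + sigma*R(theta)*v
    ry = cy + sigma * (math.sin(t) * vx + math.cos(t) * vy)
    return theta, sigma, (rx, ry)
```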
所述S4中对参考点位置进行投票,计算公式为:In S4, the reference point position is voted, and the calculation formula is:
(xr,yr)ᵀ = (xc,yc)ᵀ + σ·R(θ)·(vx,vy)ᵀ,
R(θ) = [cos θ, −sin θ; sin θ, cos θ],
(xc,yc) = ((xfst +xscd)/2, (yfst +yscd)/2),
其中,(xr,yr)表示目标图像参考点的坐标,(vx,vy)表示位移向量坐标,R(θ)表示旋转角度θ的旋转矩阵,(xc,yc)表示目标图像点对位置的中心点坐标,(xfst,yfst)和(xscd,yscd)分别表示点对的基点位置坐标与子点位置坐标。Where (xr,yr) are the coordinates of the target-image reference point, (vx,vy) are the displacement vector coordinates, R(θ) is the rotation matrix of angle θ, (xc,yc) is the center point of the target-image point pair, and (xfst,yfst) and (xscd,yscd) are the coordinates of the base point and the sub-point of the pair, respectively.
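Because rotation and scale are recovered per point pair rather than voted on, a single two-dimensional accumulator over reference-point positions suffices. A minimal sketch, with an assumed coarse cell size for the accumulator:

```python
def vote(ref_points, width, height, cell=4):
    """Accumulate candidate reference points into a coarse 2-D grid and
    return (peak cell center, vote count). Only a 2-D map is needed since
    theta and sigma were already recovered per pair."""
    acc = {}
    for x, y in ref_points:
        if 0 <= x < width and 0 <= y < height:
            key = (int(x // cell), int(y // cell))
            acc[key] = acc.get(key, 0) + 1
    (cx, cy), votes = max(acc.items(), key=lambda kv: kv[1])
    return (cx * cell + cell / 2.0, cy * cell + cell / 2.0), votes
```

The peak cell gives the target center; the pairs that voted into it identify the matched edge points and the associated θ and σ give the pose.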
In addition, the present application further provides an anti-occlusion system for fast matching and pose estimation of rigid targets, comprising:
a template image module, configured to extract the edge points of the rigid target in the template image and construct point pairs from them; to screen the constructed point pairs by gradient-direction difference; and to build a reference table from the point pairs' feature information, the feature information including the inner angle at the sub-point and the ratio of the segment lengths from the base point and the sub-point along their gradient directions;
a target image module, configured to acquire the target image and extract its edge points; using grid-based screening, to select in each grid cell containing edge points the edge point with the largest gradient magnitude as the base point of a point pair; to build a gradient-direction lookup table from the base points' gradient directions; and to screen the pairs' sub-points by base point and gradient-direction difference;
a point-pair index module, configured to construct point pairs from the screened base points and sub-points, obtain the feature information of the target-image point pairs, compute the angle index from the inner angle at the sub-point and the scale index from the base-point/sub-point segment-length ratio; and, from the established reference table together with the angle and scale indices, to obtain the rotation angle and scale of the target image relative to the template image and thereby the reference-point position of the rigid target;
a matching module, configured to vote on the reference-point positions, determine the edge points and center position of the rigid target in the target image, and match the pose of the rigid target.
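The matching module's voting step can be illustrated with a coarse accumulator: candidate reference points are binned on a grid and the peak bin is taken as the target center. The bin size is a hypothetical parameter, not specified by the patent.

```python
from collections import defaultdict

def accumulate_votes(votes, cell=4):
    """Coarse Hough accumulator: bin candidate reference points into
    cell x cell pixel bins and return the center of the peak bin."""
    acc = defaultdict(int)
    for x, y in votes:
        acc[(int(x // cell), int(y // cell))] += 1
    # Peak bin = the bin that collected the most votes.
    (bx, by), _ = max(acc.items(), key=lambda kv: kv[1])
    return (bx * cell + cell / 2.0, by * cell + cell / 2.0)
```

Because occluded edges simply fail to contribute votes rather than corrupting the accumulator, a clear peak can survive partial occlusion, which is the source of the method's robustness claim.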
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202410479462.XACN118097191B (en) | 2024-04-22 | 2024-04-22 | Anti-shielding rigid body target quick matching and pose estimation method and system |
| Publication Number | Publication Date |
|---|---|
| CN118097191A CN118097191A (en) | 2024-05-28 |
| CN118097191Btrue CN118097191B (en) | 2024-06-21 |