Technical Field
The invention belongs to the technical field of radar target recognition, and in particular relates to an automatic target recognition method based on multi-source feature fusion of SAR images. It can be used for target classification and recognition in SAR images.
Background
Synthetic Aperture Radar (SAR), an active microwave imaging sensor used in active radar seekers, offers all-weather, day-and-night detection capability and strong autonomy and anti-jamming ability in complex battlefield environments. However, the low resolution of SAR imaging, image distortion, and cluttered backgrounds pose severe challenges for target recognition.
At present, research on fusing multi-source features of raw SAR images, so as to increase the utilization of target information, overcome the one-sidedness of single-source sensors in acquiring target information, and improve the accuracy and robustness of automatic target recognition, is both difficult and topical. On the one hand, because raw SAR images are sensitive to parameters such as azimuth angle, fusing images from different viewing angles into a single independent image gives unsatisfactory results. On the other hand, fusing features of different types is itself difficult, so target recognition methods based on multi-source feature fusion require further exploration and study.
Summary of the Invention
The purpose of the invention is as follows: to counter the severe effects that changes in target size, azimuth, and rotation, as well as strong clutter backgrounds, have on target recognition, the invention projects a three-dimensional model of the target onto a two-dimensional plane, extracts cosine Fourier moment features and peak features using, respectively, cosine Fourier invariant moments and a Rayleigh-distribution CFAR detection method, and performs feature-level fusion recognition of the target with a cascaded fusion classifier. This achieves target recognition under high feature dimensionality and attitude change without adding overhead to the guidance and control system. The invention offers good real-time recognition performance, robust recognition results, and high recognition accuracy.
The technical scheme of the invention is as follows:
1. Approach
Cosine Fourier moments and a Rayleigh-distribution CFAR detection method are used to extract features from, respectively, the two-dimensional projection of the target and the raw SAR image. A cascaded fusion classifier is then built over the moment and peak feature vectors of the target image, achieving multi-source feature fusion for target recognition under high target feature dimensionality and attitude change.
2. Implementation steps
The raw-SAR-image automatic target recognition method with multi-source feature fusion proposed by the invention comprises the following steps:
Step S1: input raw SAR images of different targets as the training sample set, and preprocess the training sample set:
S101: based on simulated 3D target shape data, build a 3D target shape model for the raw SAR image and project the 3D model onto a 2D plane to obtain a 2D image f(x, y) in Cartesian coordinates; standardize f(x, y) to obtain the standardized image f(m, n); then compute the polar-coordinate representation of f(m, n) to obtain the model-projection polar image f(r, θ);
S102: binarize the raw SAR image, then apply edge detection to obtain an edge image, and transform the edge image from Cartesian to polar coordinates to obtain the SAR polar image f′(r′, θ′); any conventional edge detection method, such as a gradient edge detection algorithm, may be used.
S103: slice the target from the raw SAR image to obtain the raw-SAR-image target slice.
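The preprocessing of steps S101 and S102 can be sketched as follows. This is a minimal illustration, not the patent's implementation: the binarization threshold, the Sobel operator standing in for "any conventional" gradient edge detector, and the nearest-neighbour polar resampling with radial and angular bin counts `n_r`, `n_t` are all assumptions of this sketch.

```python
import math

def binarize(img, thresh):
    """Threshold a grayscale image (list of lists) to a 0/1 image."""
    return [[1 if v > thresh else 0 for v in row] for row in img]

def sobel_edges(img):
    """Gradient-magnitude edge map via the Sobel operator (border pixels left at 0)."""
    h, w = len(img), len(img[0])
    kx = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]
    ky = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]
    out = [[0.0] * w for _ in range(h)]
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            gx = sum(kx[a][b] * img[i - 1 + a][j - 1 + b]
                     for a in range(3) for b in range(3))
            gy = sum(ky[a][b] * img[i - 1 + a][j - 1 + b]
                     for a in range(3) for b in range(3))
            out[i][j] = math.hypot(gx, gy)
    return out

def cart_to_polar(img, n_r, n_t):
    """Resample f(x, y) to f(r, theta) by nearest-neighbour lookup about the image centre."""
    h, w = len(img), len(img[0])
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    r_max = min(cy, cx)
    polar = [[0.0] * n_t for _ in range(n_r)]
    for ir in range(n_r):
        r = (ir + 0.5) / n_r * r_max   # bin-centre radius, always inside the image
        for it in range(n_t):
            th = 2.0 * math.pi * it / n_t
            y = int(round(cy + r * math.sin(th)))
            x = int(round(cx + r * math.cos(th)))
            polar[ir][it] = img[y][x]
    return polar
```

In use, the target-slice image would be binarized and edge-detected (S102) and the result passed through `cart_to_polar` to obtain f′(r′, θ′), while the model-projection image (S101) goes through `cart_to_polar` directly.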
Step S2: apply the cosine Fourier moment feature extraction method to the model-projection polar image f(r, θ) and the polar-form image f′(r′, θ′) of the training sample set, respectively, to obtain the moment features of the training samples.
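The patent does not reproduce the cosine Fourier moment formula, so the sketch below uses one common definition as an assumption: a radial cosine kernel cos(nπr) combined with an angular Fourier kernel exp(−imθ), with r normalized to [0, 1]. Rotating the image circularly shifts the θ axis, which only changes the phase of each moment, so the magnitudes |M(n, m)| serve as rotation-invariant features; the grid discretization and normalization are likewise assumptions.

```python
import cmath
import math

def cosine_fourier_moments(polar, n_max, m_max):
    """Moments M[n][m] of a polar image f(r, theta) on a uniform (r, theta) grid.

    Kernel: cos(n*pi*r) * exp(-1j*m*theta), r normalized to [0, 1].
    A circular shift of the theta axis (an image rotation) multiplies each
    M[n][m] by a unit phase factor, so the magnitudes are rotation invariant.
    """
    n_r, n_t = len(polar), len(polar[0])
    M = [[0j] * (m_max + 1) for _ in range(n_max + 1)]
    for n in range(n_max + 1):
        for m in range(m_max + 1):
            acc = 0j
            for ir in range(n_r):
                r = (ir + 0.5) / n_r
                for it in range(n_t):
                    th = 2.0 * math.pi * it / n_t
                    # the extra factor r is the polar area element
                    acc += polar[ir][it] * math.cos(n * math.pi * r) \
                           * cmath.exp(-1j * m * th) * r
            M[n][m] = acc / (n_r * n_t)
    return M

def moment_features(polar, n_max=3, m_max=3):
    """Rotation-invariant feature vector: the moment magnitudes, flattened."""
    M = cosine_fourier_moments(polar, n_max, m_max)
    return [abs(M[n][m]) for n in range(n_max + 1) for m in range(m_max + 1)]
```

Because the features are magnitudes of an angular Fourier transform, a rotated copy of the same polar image yields the same feature vector up to floating-point error.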
Step S3: extract peak features from the raw-SAR-image target slices of the training sample set.
Step S301: detect target and background in the raw-SAR-image target slice:
Substituting the Rayleigh distribution into the CFAR (constant false alarm rate) detector gives the false alarm probability p_FA = exp(−T² / (2 b_s²)), where b_s is the shape parameter of the Rayleigh distribution and Z denotes the clutter intensity. Solving this expression for T yields the threshold of the sliding-window CFAR detector based on the Rayleigh distribution: T = b_s·√(−2 ln p_FA).
The target and background in the raw-SAR-image target slice are detected and segmented with the threshold T: if the local center pixel x_c of the slice satisfies x_c > T, then x_c is a target pixel; otherwise x_c is a background pixel. The target pixels of the slice form the target-segmented image.
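The threshold computation and segmentation of step S301 can be sketched as follows. For brevity this sketch applies one global threshold rather than the patent's sliding window, and estimates the shape parameter b_s from background samples with the moment estimator b_s = √(E[Z²]/2); both simplifications are assumptions.

```python
import math

def rayleigh_cfar_threshold(background, p_fa):
    """CFAR threshold T = b_s * sqrt(-2 ln p_FA) for Rayleigh-distributed clutter.

    b_s is estimated from background samples via the moment estimator
    b_s = sqrt(E[Z^2] / 2); the patent's sliding-window estimate is not
    reproduced here.
    """
    b_s = math.sqrt(sum(z * z for z in background) / (2.0 * len(background)))
    return b_s * math.sqrt(-2.0 * math.log(p_fa))

def segment(img, t):
    """Label each pixel: 1 (target) if above the threshold, otherwise 0 (background)."""
    return [[1 if v > t else 0 for v in row] for row in img]
```

For example, with b_s = 1 and p_FA = 0.01 the threshold is √(−2 ln 0.01) ≈ 3.03, so only pixels brighter than about three clutter scale units are declared target pixels.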
Step S302: apply a ω×ω rectangular morphological closing filter to the segmented image of step S301, then apply a counting filter to the closed image, removing any center pixel whose fill rate of peak pixels (pixels in the filter-window region that exceed a preset threshold) is not greater than (20±5)%, yielding the counting-filter result. The counting-filter window may be rectangular or circular: for a rectangle, ω is the length of its longest side; for a circle, ω is its diameter. ω is a preset value, such as 5 or 6, chosen according to the size of the raw-SAR-image target slice.
Set the non-zero values of the count-filtered target-segmented image to 1 and all others to 0, producing a mask template the same size as the target-segmented image. To enhance the target-region information in the filtered image, multiply the mask template element-wise with the raw SAR image (pixel by pixel at the same positions) to obtain the final target-segmented image.
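The filtering chain of step S302 can be sketched as below. The square structuring element, the zero-padded border handling, and the default values ω = 5 and fill rate 20% are assumptions of this sketch.

```python
def _window(img, i, j, w):
    """Values of the w x w window centred at (i, j); pixels outside the image count as 0."""
    h, ww = len(img), len(img[0])
    r = w // 2
    return [img[a][b] if 0 <= a < h and 0 <= b < ww else 0
            for a in range(i - r, i + r + 1) for b in range(j - r, j + r + 1)]

def close_binary(img, w=5):
    """Morphological closing (dilation followed by erosion) with a w x w square."""
    h, ww = len(img), len(img[0])
    dil = [[1 if any(_window(img, i, j, w)) else 0 for j in range(ww)] for i in range(h)]
    return [[1 if all(_window(dil, i, j, w)) else 0 for j in range(ww)] for i in range(h)]

def count_filter(img, w=5, fill=0.20):
    """Remove centre pixels whose window fill rate is not greater than `fill`."""
    h, ww = len(img), len(img[0])
    out = [[0] * ww for _ in range(h)]
    for i in range(h):
        for j in range(ww):
            win = _window(img, i, j, w)
            if img[i][j] and sum(win) / len(win) > fill:
                out[i][j] = img[i][j]
    return out

def apply_mask(mask, sar):
    """Element-wise product of the 0/1 mask template with the raw SAR image."""
    return [[m * s for m, s in zip(mr, sr)] for mr, sr in zip(mask, sar)]
```

The effect is that isolated speckle pixels (fill rate about 4% in a 5×5 window) are rejected, while compact target regions pass through and keep their original SAR amplitudes after masking.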
Step S303: extract peak features from the final target-segmented image obtained in step S302:
Compute a peak metric p_ij for each pixel of the final target-segmented image, where the subscripts i, j are the pixel coordinates: p_ij = 1 if the center value a_ij exceeds every pixel a_{m,n} in its neighborhood U(a_ij) (for example, the 8-neighborhood) and exceeds σ; otherwise p_ij = 0. Here a_ij denotes the value of the current pixel, U(a_ij) the neighborhood centered on a_ij, a_{m,n} an individual pixel value within U(a_ij), and σ the standard deviation of the pixel intensities of the final target-segmented image.
If p_ij is 1, the current pixel is a peak pixel (a peak-point pixel); if p_ij is 0, the current pixel is a non-peak pixel.
Step S304: normalize the amplitudes of all peak points extracted from the final target-segmented image to obtain the relative target peak amplitudes, where X_j denotes the j-th peak point of the final target-segmented image, V the number of peak points, and a(X_j) the amplitude of the j-th peak point X_j.
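Steps S303 and S304 can be sketched together as follows. Two details are assumptions, since the patent's exact formulas did not survive extraction: the amplitude gate compares each candidate peak against σ directly, and the normalization divides every peak amplitude by the largest one.

```python
import math

def peak_points(img):
    """Peak pixels: strictly greater than all 8 neighbours and above sigma.

    sigma is the standard deviation of the image's pixel intensities; using it
    directly as the amplitude gate is an assumption of this sketch. Border
    pixels are skipped because they lack a full 8-neighbourhood.
    """
    h, w = len(img), len(img[0])
    flat = [v for row in img for v in row]
    mean = sum(flat) / len(flat)
    sigma = math.sqrt(sum((v - mean) ** 2 for v in flat) / len(flat))
    peaks = []
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            a = img[i][j]
            nbrs = [img[m][n] for m in (i - 1, i, i + 1) for n in (j - 1, j, j + 1)
                    if (m, n) != (i, j)]
            if a > sigma and all(a > v for v in nbrs):
                peaks.append((i, j, a))
    return peaks

def relative_amplitudes(peaks):
    """Normalize peak amplitudes by the largest one (assumed normalization)."""
    top = max(a for _, _, a in peaks)
    return [a / top for _, _, a in peaks]
```

The resulting list of (row, column, relative amplitude) triples is the peak feature vector fed to the second-level matching classifier.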
Step S4: build the cascaded fusion classifier.
Step S401: use an SVM (support vector machine) classifier as the first-level feature classifier: given a preset number of categories h, train the SVM on the moment feature vectors of the training samples to obtain h SVM category templates, completing the training of the first-level feature classifier.
Step S402: use a peak matching classifier as the second-level feature classifier: given the preset number of categories h, train the peak matching classifier on the peak feature vectors of the training samples to obtain templates for the h peak-feature categories, completing the training of the second-level feature classifier.
Step S403: cascade the first-level and second-level feature classifiers to obtain the cascaded fusion classifier.
Step S5: input the SAR image to be recognized, extract its features, and complete the target recognition.
Step S501: preprocess the SAR image to be recognized:
Using the same processing as step S101, obtain the model-projection polar image f(r, θ) of the SAR image to be recognized; using the same processing as step S102, obtain the SAR polar image f′(r′, θ′) of the SAR image to be recognized;
Slice the target from the SAR image to be recognized to obtain its target slice;
Step S502: apply the cosine Fourier moment feature extraction method to the model-projection polar image f(r, θ) and the SAR polar image f′(r′, θ′) of the image to be recognized, obtaining its moment features;
Step S503: input the moment feature vector of the SAR image to be recognized (the "object to be recognized") into the first-level feature classifier for primary classification. The first-level classifier outputs the set of posterior probabilities of the object belonging to each category, Pset = {p1, p2, ..., ph}. The classification confidence that the object belongs to category K is expressed as conf(K) = p_K − max(P̃set), i = 1, ..., h, where P̃set is the set formed by the elements of Pset other than p_K, and p_i denotes the posterior probability that the object belongs to category i.
If the confidence is less than or equal to the confidence threshold, proceed to the recognition of the second-level feature classifier and pass the posterior probability set Pset as prior information into the peak-feature matching recognition; otherwise, output the result of the first-level feature classifier: the category corresponding to the largest posterior probability in Pset = {p1, p2, ..., ph} is the recognition result for the current object.
Step S504: perform the second-level recognition with the peak matching classifier:
Let G = {g1, g2, ..., gh} denote the set of similarities between the SAR image to be recognized and the category templates of the second-level feature classifier, i.e., the peak-feature similarity set. The target similarities g_i (i = 1, ..., h) output by the peak matching classifier are amplified, yielding the transformed similarities g′_i, where k indexes the mutually exclusive feature similarities in G;
Step S505: the classification metric D_i of the cascaded fusion classifier is the sum of the posterior probability p_i and the amplified similarity g′_i, i.e., D_i = p_i + g′_i, i = 1, ..., h;
The category of the currently input SAR image to be recognized is then given by the largest of the h values D_i.
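The decision logic of steps S503 to S505 can be sketched as follows. Two pieces are assumptions: the confidence is taken as the margin between the top posterior and its best competitor, and the amplification transform is a stand-in, g′_i = g_i / max_{k≠i} g_k, since the patent's exact formula is not reproduced here.

```python
def confidence(p):
    """Margin between the top posterior probability and the best competing one."""
    best = max(p)
    others = list(p)
    others.remove(best)
    return best - max(others)

def amplify(g):
    """Widen the gaps between similarities (stand-in transform, an assumption):
    g'_i = g_i / max_{k != i} g_k."""
    out = []
    for i, gi in enumerate(g):
        rival = max(gk for k, gk in enumerate(g) if k != i)
        out.append(gi / rival)
    return out

def cascade_decide(p, g, tau):
    """Cascaded fusion decision.

    Accept the first (SVM) stage when its confidence exceeds tau; otherwise
    fuse with the peak-matching stage via D_i = p_i + g'_i and take the argmax.
    """
    if confidence(p) > tau:
        return max(range(len(p)), key=lambda i: p[i])
    gp = amplify(g)
    d = [pi + gi for pi, gi in zip(p, gp)]
    return max(range(len(d)), key=lambda i: d[i])
```

A confident first-stage posterior such as (0.9, 0.05, 0.05) is accepted directly, while an ambiguous one such as (0.40, 0.35, 0.25) falls through to the fused metric, where a strong peak-feature match can overturn the SVM's ranking.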
When extracting the moment features of the raw SAR image, the invention first standardizes the projection image, which significantly reduces the number of target feature views. Step S2 extracts moment features from the two-dimensional projection of the three-dimensional target model using cosine Fourier invariant moments, exploiting their invariance under translation, rotation, and scaling, which lowers the feature dimensionality and reduces the computational load. The cascaded fusion classifier improves the recognition probability and robustness of the invention. Throughout the recognition process, the invention completes target recognition automatically, without human intervention.
In summary, by adopting the above technical scheme, the invention achieves the following beneficial effects:
1. Improved real-time recognition: the invention extracts invariant moment features with cosine Fourier moments, avoiding the need to pre-store views for every changing position, range, and attitude; redundant view features are reduced, lowering the computational load.
2. Enhanced recognition robustness: the invention uses a cascaded fusion classifier combining an SVM and a matching algorithm to recognize the moment and peak features in two stages, strengthening the robustness of the recognition results.
3. Improved recognition performance: compared with a single SVM method or matching algorithm, the multi-source feature fusion method of the invention makes target recognition easier without adding overhead to the guidance system.
Description of Drawings
Fig. 1 is a schematic flowchart of the target recognition method provided by an embodiment of the invention;
Fig. 2 is a diagram of the implementation of the method according to an embodiment of the invention.
Detailed Description
To make the purpose, technical scheme, and advantages of the invention clearer, the invention is described in further detail below with reference to the embodiments and the accompanying drawings.
When azimuth templates of the target are missing, a single recognition algorithm suffers from low recognition probability and weak robustness. To address this, the invention introduces projection images of a target CAD (Computer Aided Design) model to fill in the missing target azimuth angles, and extracts their moment features to assist peak-feature recognition, thereby improving the real-time performance, robustness, and accuracy of target recognition in raw SAR images. Referring to Figs. 1 and 2, the specific implementation steps of the invention are as follows:
Step S1: read raw SAR images of different targets as the training sample set and test sample set of this embodiment, and preprocess both sets:
S101: based on simulated 3D target shape data, build a 3D target shape model for the raw SAR image and project the 3D model onto a 2D plane to obtain a 2D image f(x, y) in Cartesian coordinates; standardize f(x, y) to obtain the standardized image f(m, n); then compute the polar-coordinate representation of f(m, n) to obtain the model-projection polar image f(r, θ);
S102: binarize the raw SAR image, then apply a gradient edge detection algorithm to the binarized image to obtain an edge image, and transform the edge image from Cartesian to polar coordinates to obtain the SAR polar image f′(r′, θ′);
S103: slice the target from the raw SAR image to obtain the raw-SAR-image target slice.
Step S2: apply the cosine Fourier moment feature extraction method to extract the moment features of the training and test sample sets. That is, extract moment features from the model-projection polar image f(r, θ) and the SAR polar image f′(r′, θ′) of the training set to obtain the training-sample moment features, and likewise from those of the test set to obtain the test-sample moment features.
Step S3: extract peak features from the training samples and test samples, respectively; the object of extraction is the raw-SAR-image target slice.
Step S301: detect the target and background in the raw SAR image slice according to the Rayleigh distribution:
f(Z) = (Z / b_s²)·exp(−Z² / (2 b_s²))  (1)
where b_s is the shape parameter of the Rayleigh distribution and Z denotes the clutter intensity.
Substituting formula (1) into the CFAR detector gives:
p_FA = ∫_T^∞ f(Z) dZ = exp(−T² / (2 b_s²))  (2)
Solving formula (2) gives:
ln p_FA = −T² / (2 b_s²)  (3)
Therefore, the threshold T of the sliding-window CFAR detection algorithm based on the Rayleigh distribution is:
T = b_s·√(−2 ln p_FA)  (4)
Step S302: segment the target and background in the raw-SAR-image target slice with the threshold T: if the local center pixel x_c of the target slice satisfies x_c > T, then x_c is a target pixel; otherwise x_c is a background pixel. The target pixels of the slice form the target-segmented image;
Step S303: apply a 5×5 rectangular morphological closing filter to the target-segmented image of step S302, then apply a counting filter to the closed image, removing any center pixel whose peak-pixel fill rate in the filter-window region is not greater than (20±5)%, yielding the counting-filter result;
Set the non-zero values of the counting-filter result to 1 and all others to 0, producing a mask template the same size as the target-segmented image. To enhance the target-region information in the filtered image, multiply the mask template element-wise with the raw SAR image (pixel by pixel at the same positions) to obtain the final target-segmented image.
Step S304: extract peak features from the final target-segmented image obtained in step S303:
The peak features of the raw SAR image are obtained by extracting peaks at the row vertices, column vertices, and two-dimensional vertices of the target in the final segmented image. Compute the peak metric p_ij for each pixel of the final target-segmented image: p_ij = 1 if the center value a_ij exceeds every pixel a_{m,n} in its eight-neighborhood U(a_ij) and exceeds σ; otherwise p_ij = 0. Here the subscripts i and j are the pixel coordinates, a_ij is the value of the current pixel, U(a_ij) is the eight-neighborhood centered on a_ij, a_{m,n} is an individual pixel value within U(a_ij), and σ is the standard deviation of the pixel intensities of the final target-segmented image;
If the metric p_ij of the current pixel is 1, the pixel is recorded as a peak pixel; otherwise it is background clutter;
Step S305: normalize the amplitudes of all peak points in the final target-segmented image to obtain the relative target peak amplitudes,
where X_j denotes the j-th peak point of the final target-segmented image, V the number of peak points, and a(X_j) the amplitude of the j-th peak point X_j.
This completes the peak feature extraction and normalization for the training samples and test samples.
Step S4: construct the cascaded fusion classifier from the training samples:
Step S401: given the preset number of categories h, train an SVM classifier on the moment feature vectors of the training samples to obtain h SVM category templates, completing the training of the first-level feature classifier.
Step S402: given the preset number of categories h, train a peak matching classifier on the peak feature vectors of the training samples to obtain matching templates for the h peak-feature categories, completing the training of the second-level feature classifier.
Step S403: cascade the first-level and second-level feature classifiers to obtain the cascaded fusion classifier.
Step S5: perform category recognition on the test samples with the trained cascaded fusion classifier:
Step S501: input the moment feature vector of a test sample into the first-level feature classifier for primary classification; the classifier outputs the set of posterior probabilities of the test sample belonging to each category, Pset = {p1, p2, ..., ph}. The classification confidence of each test sample can therefore be expressed as
conf(K) = p_K − max(P̃set), where K identifies an arbitrary test sample and P̃set is the set formed by the elements of Pset other than p_K.
For a test sample K, if its classification confidence conf(K) exceeds the given threshold, the recognition result of the first-level feature classifier is output directly: the category corresponding to the largest posterior probability in Pset = {p1, p2, ..., ph} is the recognition result for the current test sample.
Otherwise, proceed to the secondary classification of the second-level feature classifier, i.e., execute step S502.
Step S502: input the peak feature vector of the test sample into the second-level feature classifier, and pass the posterior probability set of the first-level classifier as prior information into the peak-feature matching recognition.
To increase the differences among the extracted peak-feature similarities, let G = {g1, g2, ..., gh} be the set of similarities between the selected test sample and the peak-category matching templates.
The target similarities g_i output by the peak matching classifier are amplified; the transformed similarity of the peak feature g_i is denoted g′_i,
where k indexes the mutually exclusive feature similarities in the peak-feature similarity set G.
Step S503: take the posterior probability p_i of the SVM classifier as the recognition metric of the first-level feature classifier and the amplified similarity g′_i of the peak matching classifier as the recognition metric of the second-level feature classifier; the sum of the two gives the classification metric D_i of the cascaded fusion classifier:
D_i = p_i + g′_i  (9)
The category corresponding to the largest of the h values D_i is judged to be the category of the current test sample.
In summary, the SAR image automatic target recognition method with multi-source feature fusion of the embodiment of the invention maintains a high recognition rate even when the target undergoes various attitude changes.
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN201710312180.0A (CN107239740B) | 2017-05-05 | 2017-05-05 | A SAR image automatic target recognition method of multi-source feature fusion |
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN201710312180.0ACN107239740B (en) | 2017-05-05 | 2017-05-05 | A kind of SAR image automatic target recognition method of multi-source Fusion Features |
| Publication Number | Publication Date |
|---|---|
| CN107239740A CN107239740A (en) | 2017-10-10 |
| CN107239740Btrue CN107239740B (en) | 2019-11-05 |
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| CN201710312180.0AExpired - Fee RelatedCN107239740B (en) | 2017-05-05 | 2017-05-05 | A kind of SAR image automatic target recognition method of multi-source Fusion Features |
| Country | Link |
|---|---|
| CN (1) | CN107239740B (en) |
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN108830242A (en)* | 2018-06-22 | 2018-11-16 | 北京航空航天大学 | SAR image targets in ocean classification and Detection method based on convolutional neural networks |
| CN109409286A (en)* | 2018-10-25 | 2019-03-01 | 哈尔滨工程大学 | Ship target detection method based on the enhancing training of pseudo- sample |
| CN109816634B (en)* | 2018-12-29 | 2023-07-11 | 歌尔股份有限公司 | Detection method, model training method, device and equipment |
| CN110148146B (en)* | 2019-05-24 | 2021-03-02 | 重庆大学 | A method and system for plant leaf segmentation using synthetic data |
| CN110210403B (en)* | 2019-06-04 | 2022-10-14 | 电子科技大学 | SAR image target identification method based on feature construction |
| CN112070151B (en)* | 2020-09-07 | 2023-12-29 | 北京环境特性研究所 | Target classification and identification method for MSTAR data image |
| CN112800980B (en)* | 2021-02-01 | 2021-12-07 | 南京航空航天大学 | SAR target recognition method based on multi-level features |
| CN113534120B (en)* | 2021-07-14 | 2023-06-30 | 浙江大学 | Multi-target constant false alarm rate detection method based on deep neural network |
| CN113743481B (en)* | 2021-08-20 | 2024-04-16 | 北京电信规划设计院有限公司 | Human-like image recognition method and system |
| CN113591804B (en)* | 2021-09-27 | 2022-02-22 | 阿里巴巴达摩院(杭州)科技有限公司 | Image feature extraction method, computer-readable storage medium, and computer terminal |
| CN114612766B (en)* | 2022-03-03 | 2025-04-08 | 湘潭大学 | An automatic ghost detection algorithm for SAR images |
| CN114782480B (en)* | 2022-03-19 | 2024-04-09 | 中国电波传播研究所(中国电子科技集团公司第二十二研究所) | Automatic extraction method for vehicle targets in SAR image |
| CN114627089B (en)* | 2022-03-21 | 2024-11-01 | 成都数之联科技股份有限公司 | Defect identification method, defect identification device, computer equipment and computer readable storage medium |
| CN115034257B (en)* | 2022-05-09 | 2023-04-07 | 西北工业大学 | Cross-modal information target identification method and device based on feature fusion |
| CN116701990A (en)* | 2023-05-10 | 2023-09-05 | 深圳数联天下智能科技有限公司 | Daily behavior detection method, device, equipment and computer storage medium |
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN102081791A (en)* | 2010-11-25 | 2011-06-01 | 西北工业大学 | SAR (Synthetic Aperture Radar) image segmentation method based on multi-scale feature fusion |
| CN104331707A (en)* | 2014-11-02 | 2015-02-04 | 西安电子科技大学 | Polarized SAR (synthetic aperture radar) image classification method based on depth PCA (principal component analysis) network and SVM (support vector machine) |
| CN104680183A (en)* | 2015-03-14 | 2015-06-03 | 西安电子科技大学 | SAR Target Discrimination Method Based on Scattering Points and K-Center Classifiers |
| CN105842694A (en)* | 2016-03-23 | 2016-08-10 | 中国电子科技集团公司第三十八研究所 | FFBP SAR imaging-based autofocus method |
| Title |
|---|
| Application of 3D Models in SAR Image Automatic Target Recognition; Li Changjun et al.; Abstracts of the 2016 National Doctoral Academic Forum on Aeronautics Science and Technology; 2016-11-25; full text*
| Publication number | Publication date |
|---|---|
| CN107239740A (en) | 2017-10-10 |
| Publication | Publication Date | Title |
|---|---|---|
| CN107239740B (en) | SAR image automatic target recognition method based on multi-source feature fusion | |
| CN106709950B (en) | Binocular vision-based inspection robot obstacle crossing wire positioning method | |
| JP5845365B2 (en) | Improvements in or related to 3D proximity interaction | |
| Zhu et al. | Single image 3d object detection and pose estimation for grasping | |
| CN105574527B (en) | Fast object detection method based on local feature learning |
| Guo et al. | An integrated framework for 3-D modeling, object detection, and pose estimation from point-clouds | |
| CN101980250B (en) | Method for identifying target based on dimension reduction local feature descriptor and hidden conditional random field | |
| CN103727930B (en) | Relative pose calibration method for a laser range finder and camera based on edge matching |
| CN103473571B (en) | Human detection method | |
| CN104156693B (en) | Action recognition method based on multi-modal sequence fusion |
| CN104123529B (en) | human hand detection method and system | |
| CN105701820A (en) | Point cloud registration method based on matching area | |
| CN106447704A (en) | A visible light-infrared image registration method based on salient region features and edge degree | |
| CN102737235A (en) | Head posture estimation method based on depth information and color image | |
| CN103177468A (en) | Markerless augmented reality registration method for three-dimensional moving objects |
| CN102760228B (en) | Specimen-based automatic lepidoptera insect species identification method | |
| CN105488541A (en) | Natural feature point identification method based on machine learning in augmented reality system | |
| CN105279522A (en) | Scene object real-time registering method based on SIFT | |
| Zhang et al. | Pedestrian detection with EDGE features of color image and HOG on depth images | |
| CN113313725B (en) | Bung hole identification method and system for energetic material medicine barrel | |
| KR101528757B1 (en) | Texture-less object recognition using contour fragment-based features with bisected local regions | |
| Hsu | A hybrid approach for brain image registration with local constraints | |
| Bui et al. | A texture-based local soft voting method for vanishing point detection from a single road image | |
| Fan et al. | Lane detection based on machine learning algorithm | |
| KR101184588B1 (en) | A method and apparatus for contour-based object category recognition robust to viewpoint changes |
| Date | Code | Title | Description |
|---|---|---|---|
| PB01 | Publication | ||
| SE01 | Entry into force of request for substantive examination | ||
| GR01 | Patent grant | ||
| CF01 | Termination of patent right due to non-payment of annual fee | Granted publication date:20191105 |